Machine learning (ML) has been a trending technology for some time, but there is a fresh reason to talk about it in 2020, thanks to innovations such as TensorFlow.js: an end-to-end open-source ML library that can, among other things, run pre-trained AI models directly in a web browser.
We already have plenty of examples of web utilities that use AI, such as speech recognition, image recognition, natural language processing, and sentiment analysis. Typically, however, these utilities offload the machine learning work to a server, wait for it to finish, and then fetch the results.
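To see the contrast, here is a minimal sketch of what in-browser inference looks like. It assumes the page loads TensorFlow.js and the pre-trained MobileNet package (@tensorflow-models/mobilenet) via script tags, exposing a global `mobilenet` object; the element id and helper name are mine:

```javascript
// Sketch: in-browser image classification with a pre-trained model.
// Assumes the page loads @tensorflow/tfjs and @tensorflow-models/mobilenet
// via <script> tags, so a global `mobilenet` object is available.
// The 'photo' element id is a hypothetical placeholder.

async function classifyInBrowser() {
  const model = await mobilenet.load();          // weights download once
  const img = document.getElementById('photo');  // any <img> on the page
  // Returns predictions like [{ className: '...', probability: 0.92 }, ...]
  return model.classify(img);                    // no server round-trip
}

// Pure helper: pick the most probable prediction from classify() output.
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}
```

Everything, including the model weights, lives on the client, so the only network traffic is the initial download.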
The Key Point
Recently, I attempted to build a web application that, through a phone’s rear-facing camera, constantly watched for a particular logo; the idea being that when the AI recognized the logo, the website would unlock. Simple, right? Yet even this apparently straightforward task meant continuously capturing camera snapshots and posting them to a server so that the AI could identify the logo.
The job had to be done at breakneck speed so that the logo was not missed whenever the user’s phone moved. The result was tens of kilobytes being uploaded from the user’s phone every two seconds: a waste of bandwidth and a performance killer.
However, since TensorFlow.js brings TensorFlow’s server-side AI capabilities directly into the web, if I were building this project today I could load a pre-trained model and have the AI recognize the logo right in the user’s mobile browser. No upload would be required, and recognition could happen several times per second instead of once every two seconds.
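A sketch of what that in-browser detection loop might look like, assuming a global `tf` from a TensorFlow.js script tag; the model URL, input size, class index, and threshold are all hypothetical placeholders, not the actual project's values:

```javascript
// Sketch: continuous in-browser logo detection from the rear camera.
// Assumes a global `tf` from a @tensorflow/tfjs <script> tag. The model
// URL, input size, logo class index, and threshold are placeholders.

const LOGO_CLASS = 0;   // hypothetical index of the logo class
const THRESHOLD = 0.9;  // confidence needed to unlock the site

async function startDetection(onLogoSeen) {
  const model = await tf.loadLayersModel('/models/logo/model.json');
  const video = document.querySelector('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },  // rear-facing camera
  });
  await video.play();

  async function frame() {
    const scores = tf.tidy(() => {
      const input = tf.image
        .resizeBilinear(tf.browser.fromPixels(video), [224, 224])
        .expandDims(0)
        .div(255);
      return model.predict(input);
    });
    const probs = await scores.data();  // stays on-device, no upload
    scores.dispose();
    if (isLogoDetected(probs, LOGO_CLASS, THRESHOLD)) onLogoSeen();
    requestAnimationFrame(frame);       // several checks per second
  }
  requestAnimationFrame(frame);
}

// Pure helper: does the logo's score clear the confidence threshold?
function isLogoDetected(probs, classIndex, threshold) {
  return probs[classIndex] >= threshold;
}
```

The camera frames never leave the device; the only bandwidth cost is downloading the model once.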
Less Latency, More Creativity
The more complex and exciting the ML application, the closer to zero latency we need to be. With TensorFlow.js eliminating that latency, AI’s creative canvas suddenly widens, as wonderfully illustrated by Google’s experiments. Its human skeleton tracking and Emoji Scavenger Hunt projects show how much more imaginative developers can get when ML becomes a properly integrated part of the web.
The skeleton tracking is particularly fascinating. Not only does it offer an affordable alternative to Microsoft Kinect, it brings it straight onto the web. You could even go as far as building a physical installation that reacts to movement using nothing but web technologies and a standard webcam.
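As a sketch of that idea, here is how pose estimation could drive an installation, assuming the @tensorflow-models/posenet package is loaded via a script tag (exposing a global `posenet`); the gesture helper and its trigger are my own invented example:

```javascript
// Sketch: browser skeleton tracking with PoseNet. Assumes
// @tensorflow/tfjs and @tensorflow-models/posenet are loaded via
// <script> tags, exposing a global `posenet`; the callback is mine.

async function trackSkeleton(video, onPose) {
  const net = await posenet.load();
  async function frame() {
    // Returns { score, keypoints: [{ part, score, position: {x, y} }] }
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    onPose(pose);
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

// Pure helper (invented example): are both wrists raised above the nose,
// e.g. to trigger an installation? Part names follow PoseNet's output.
function handsUp(keypoints, minScore = 0.5) {
  const byPart = Object.fromEntries(keypoints.map(k => [k.part, k]));
  const nose = byPart.nose;
  return ['leftWrist', 'rightWrist'].every(part => {
    const k = byPart[part];
    return k && nose && k.score >= minScore &&
           k.position.y < nose.position.y;  // smaller y = higher on screen
  });
}
```

A standard webcam feeds the video element, and the pose callback can do anything a web page can: draw, play sound, or talk to connected hardware.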
The Emoji Scavenger Hunt, on the other hand, shows how mobile sites running TensorFlow.js can suddenly become aware of the user’s situation: where they are and what is in front of them. The site can then contextualize the information it displays accordingly.
This potentially has far-reaching implications too, because users will soon start to see mobile sites more as “assistants” than as “data providers.” It’s a shift that began with Google Assistant and Siri-enabled mobile devices.
Now, thanks to web AI, this tendency to perceive mobiles as assistants will become firmly entrenched once sites, particularly mobile sites, start performing machine learning on the spot. It could trigger a shift in perception, where users expect sites to deliver real value for any given moment, with minimal input and instruction.