Two years ago, Google launched Teachable Machine, a web experiment intended to elucidate machine learning concepts. It let any user with a webcam train an AI model to output specific media — an image, sound, speech snippet, or GIF — corresponding to a hand gesture, object, or activity. Now Teachable Machine is expanding to incorporate inputs beyond those it initially supported, including audio. Additionally, it will allow users to export their trained models to websites, apps, devices, and more.
Google says it worked with people across industries with different needs — like architect Steve Saling, who has amyotrophic lateral sclerosis (ALS) — to test and shape the new Teachable Machine. “People are using AI to explore all kinds of ideas — identifying the roots of bad traffic in Los Angeles, improving recycling rates in Singapore, and even experimenting with dance,” the company wrote in a blog post. “We collaborated with educators, artists, students, and makers of all kinds to figure out how to make [Teachable Machine] useful for them.”
“Our hope is that the new version of Teachable Machine will be a super easy way for anyone to train their machine learning models and use them in their own projects, wherever TensorFlow.js models can be run,” wrote Google.
Google isn’t the only company offering free tutorials designed to get intrepid practitioners up to speed on the basics of AI and machine learning. One recent example is a partnership between Amazon and Udacity to launch the DeepRacer Scholarship Challenge, a program that helps students create, train, and optimize AI models while receiving support from the community. Udacity previously launched a self-driving car nanodegree in partnership with big-name brands such as Mercedes-Benz.