Event Debrief: Museum of Science and Technology

Spoiler alert: math can be cool.

In the real world, formulas and equations are more than just headaches that show up on a test paper. They can be leveraged to solve concrete problems and build cutting-edge solutions, and for those with an appetite for problem solving, proficiency in math can even lead to lucrative lifelong careers in engineering, science, and research. At IMRSV Data Labs we employ complex mathematical algorithms that allow computers to learn. Although interesting, this can be a difficult concept for anyone to wrap their head around, especially a group of young kids on a rainy Saturday afternoon. Nonetheless, on May 26th that was precisely what our team set out to convey at the Museum of Science and Technology to a group of eager parents and children.

The event focused on educating the public about machine learning and the various ways it already affects our day-to-day lives. Dr. Isar Nejadgholi, our Head of Machine Learning, gave a presentation explaining the concept at a very elementary level, while the rest of the team ran a number of interactive demonstrations.

The presentation also explained how machine learning software has been modeled after the neural networks that exist within the human brain. This anatomical component is what separated humans from computers for the longest time. In our brain, we have hundreds of billions of neurons that are all connected and constantly updating. They send signals to each other in order to make sense of what we're seeing, feeling, tasting, smelling, and so on. This architecture inspired data scientists to do the same thing in software: creating complex input-output systems that can identify things in context with stellar accuracy. But just like a newborn baby, when a machine is first programmed it knows nothing. It has well-established physical infrastructure and interior tools, but it needs to learn through experience and exposure to datasets before it can become "smart". This is ultimately the crux of machine learning.
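To make the "learning from experience" idea concrete, here is a minimal sketch (not code from the presentation): a single artificial "neuron" that starts with a random connection weight and, by repeatedly comparing its guesses to example data, gradually discovers the rule output = 2 × input. The inputs and learning rate are made up for illustration.

```python
import numpy as np

# A toy "neuron": it starts knowing nothing (a random weight) and
# learns a simple rule (output = 2 * input) from example data.
rng = np.random.default_rng(0)
w = rng.normal()                 # the neuron's single connection weight

xs = np.array([1.0, 2.0, 3.0, 4.0])   # example inputs
ys = 2.0 * xs                          # the outputs we want it to learn

lr = 0.01                        # learning rate: how big each nudge is
for _ in range(500):
    pred = w * xs                          # what the neuron currently "thinks"
    grad = np.mean(2 * (pred - ys) * xs)   # how wrong, and in which direction
    w -= lr * grad                         # nudge the weight to be less wrong

print(round(w, 2))               # very close to 2.0 after training
```

After enough exposure to examples, the weight settles at the right value, which is the same story, in miniature, as a large network learning from a dataset.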

Following the presentation, we had museum-goers engage in a number of live demonstrations put on by members of the IMRSV team. YOLO (You Only Look Once) is a real-time object detection system, based on an open-source convolutional neural network (CNN). Unlike traditional sliding-window classifiers, which make thousands of predictions and aggregate the results to come to a conclusion, YOLO splits the image into a grid. It classifies what it sees per cell, then aggregates and thresholds the results to come to a conclusion. The crowd was able to test its capabilities by holding up various sample objects IMRSV had supplied, along with their own personal belongings.
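The grid-and-threshold idea can be sketched in a few lines. This is only an illustration of the concept, not the real YOLO network: the per-cell confidence scores below are invented, standing in for what the CNN would actually produce for each cell.

```python
import numpy as np

# Toy illustration of YOLO's grid idea: split an "image" into a 3x3
# grid of cells, let each cell report a confidence that it contains
# an object, then keep only the cells above a threshold.
scores = np.array([
    [0.05, 0.10, 0.02],
    [0.08, 0.91, 0.88],   # the "object" sits in the middle two cells
    [0.03, 0.07, 0.04],
])

threshold = 0.5
detections = np.argwhere(scores > threshold)  # (row, col) of confident cells
print(detections.tolist())                    # [[1, 1], [1, 2]]
```

Because every cell is scored in a single pass, the whole image is processed at once, which is what makes this style of detector fast enough to run live on a camera feed.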

The Google AIY kits (Voice and Vision) are hackable, Raspberry Pi-based devices meant to teach people about machine learning and computer vision. The audience was able to experiment with various objects by taking pictures and having the computer describe aloud what it was looking at. To improve the kits' capabilities, our team combined an online image recognition program from Google with an online text-to-speech program from Amazon. Together, they allowed for more accurate classifications that could be verbally relayed to the crowd through a set of speakers.
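The demo was essentially a two-step pipeline: classify the image, then speak the label. The sketch below shows that shape only; `classify_image` and `speak` are hypothetical stand-ins for the cloud services used at the event, stubbed out here so the example runs on its own.

```python
# Hypothetical sketch of the demo pipeline: classify an image, then
# speak the result. Both functions are stubs standing in for the real
# cloud calls (image recognition and text-to-speech).
def classify_image(image_bytes: bytes) -> str:
    # Stand-in for a call to a cloud image-recognition service.
    return "coffee mug"

def speak(text: str) -> str:
    # Stand-in for a text-to-speech call; real code would play audio.
    return f"I think I am looking at a {text}."

label = classify_image(b"...camera frame...")
print(speak(label))   # I think I am looking at a coffee mug.
```

Chaining two independent services this way is a common pattern: each does one thing well, and a few lines of glue code turn them into an interactive exhibit.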

Our third demo aimed to help reinforce the concept of training a machine through exposure and experience. Google’s Teachable Machine is a basic program that allows users to witness machine learning, in real time, using nothing but their camera. The audience was able to see how repeated exposure to an object or person allowed the computer to eventually recognize it with extremely high accuracy.  
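The "repeated exposure" effect can be captured with a toy classifier. This is not Teachable Machine's actual implementation; it is a minimal nearest-centroid sketch, with made-up feature vectors standing in for what a camera and network would produce. Each new example of a class updates that class's average, so recognition sharpens as more examples arrive.

```python
import numpy as np

centroids = {}   # class name -> running mean of its feature vectors
counts = {}

def show_example(label, features):
    """Update the running average for `label` with one more example."""
    features = np.asarray(features, dtype=float)
    if label not in centroids:
        centroids[label] = features.copy()
        counts[label] = 1
    else:
        counts[label] += 1
        centroids[label] += (features - centroids[label]) / counts[label]

def recognize(features):
    """Return the class whose average example is closest to `features`."""
    features = np.asarray(features, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - features))

# Repeated exposure to two "objects" (invented feature vectors)
for f in ([1.0, 0.1], [0.9, 0.2], [1.1, 0.0]):
    show_example("mug", f)
for f in ([0.0, 1.0], [0.1, 0.9]):
    show_example("hand", f)

print(recognize([0.95, 0.1]))   # mug
```

The more examples a class has seen, the more stable its average becomes, which is exactly the behaviour the audience watched on screen: a few seconds of showing the camera an object was enough for the model to start recognizing it reliably.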

Overall, the event was a tremendous success. We hope everyone involved enjoyed themselves and left slightly more educated! Below are some pictures from Saturday.

Simon Hicks