
The Data Daily

Learn about Artificial Intelligence (AI)



AI and Machine Learning impact our entire world, changing how we live and how we work. That's why it's critical for all of us to understand this increasingly important technology, including not just how it's designed and applied, but also its societal and ethical implications.

Join us to explore AI in a new video series, train AI for Oceans in 25+ languages, discuss ethics, and more!

Join some of our favorite AI experts for a panel discussion on Tuesday, December 8! We'll discuss some of the cool ways that AI is used, and touch on important issues like algorithmic bias and the future of work. The panel is appropriate for students in grade 8 and up. Pair it with our AI for Oceans tutorial and our new AI & Ethics lesson plan for a great introduction to the ethics of artificial intelligence! Join us from 10:30 to 11:15 am PST on December 8: add it to your calendar, then tune in on Zoom or stream on YouTube.

Panelist: Mehran Sahami, Professor and Associate Chair for Education in Computer Science, Stanford University

With an introduction by Microsoft CEO Satya Nadella, this series of short videos will introduce you to how artificial intelligence works and why it matters. Learn about neural networks, or how AI learns, and delve into issues like algorithmic bias and the ethics of AI decision-making.

Levels 2-4 use a pretrained model provided by the TensorFlow MobileNet project. A MobileNet model is a convolutional neural network that has been trained on ImageNet, a dataset of over 14 million images hand-annotated with words such as "balloon" or "strawberry". In order to customize this model with the labeled training data the student generates in this activity, we use a technique called Transfer Learning. Each image in the training dataset is fed to MobileNet, as pixels, to obtain a list of annotations that are most likely to apply to it. Then, for a new image, we feed it to MobileNet and compare its resulting list of annotations to those from the training dataset. We classify the new image with the same label (such as "fish" or "not fish") as the images from the training set with the most similar results.
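The sketch below illustrates the general idea in Python, using the Keras MobileNet model as a stand-in for the TensorFlow.js model the activity actually runs in the browser. The file paths, labels, and nearest-neighbor comparison are illustrative assumptions: each image is turned into MobileNet's vector of ImageNet annotation probabilities, and a new image takes the label of the most similar training images.

```python
# Minimal sketch of the transfer-learning idea described above (assumptions noted).
import numpy as np
import tensorflow as tf

# Pretrained MobileNet that outputs probabilities over 1,000 ImageNet classes.
mobilenet = tf.keras.applications.MobileNet(weights="imagenet")

def annotation_vector(image_path):
    """Return MobileNet's ImageNet probability vector for one image."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet.preprocess_input(x)
    return mobilenet.predict(x, verbose=0)[0]

def classify(new_image_path, training_paths, training_labels, k=5):
    """Label a new image by its k most similar training images (cosine similarity)."""
    train_vecs = np.stack([annotation_vector(p) for p in training_paths])
    new_vec = annotation_vector(new_image_path)
    sims = train_vecs @ new_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(new_vec) + 1e-9
    )
    votes = [training_labels[i] for i in np.argsort(sims)[-k:]]
    return max(set(votes), key=votes.count)

# Hypothetical usage: the paths and "fish" / "not fish" labels would come from
# the training data a student generates in the activity.
# label = classify("new.jpg", ["fish1.jpg", "trash1.jpg"], ["fish", "not fish"])
```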

Levels 6-8 use a Support-Vector Machine (SVM). We look at each component of the fish (such as eyes, mouth, body) and assemble all of the metadata for the components (such as number of teeth, body shape) into a vector of numbers for each fish. We use these vectors to train the SVM. Based on the training data, the SVM separates the "space" of all possible fish into two parts, which correspond to the classes we are trying to learn (such as "blue" or "not blue").
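As a rough illustration of that approach, the sketch below trains a linear SVM with scikit-learn. The specific features (number of teeth, body width, and so on) and the example values are assumptions made up for the sketch; the point is that each fish becomes a numeric vector and the SVM learns a boundary separating the two classes.

```python
# Minimal sketch of the SVM approach described above (feature names are illustrative).
import numpy as np
from sklearn import svm

# Each row is one fish: [number of teeth, body width, body height, eye size].
X_train = np.array([
    [2, 1.4, 0.9, 0.2],
    [0, 2.1, 1.8, 0.5],
    [6, 1.0, 0.7, 0.1],
    [1, 2.3, 2.0, 0.4],
])
# The two classes the SVM separates, e.g. "blue" vs. "not blue".
y_train = ["not blue", "blue", "not blue", "blue"]

# Train a linear SVM: it finds the boundary that best separates the two classes.
classifier = svm.SVC(kernel="linear")
classifier.fit(X_train, y_train)

# A new fish is classified by which side of the boundary its vector falls on.
new_fish = np.array([[3, 1.2, 0.8, 0.2]])
print(classifier.predict(new_fish))
```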
