- CLASSES
4/22 - Class 8
Class notes:
Recurrent Neural Networks - RNN
What is an RNN?
Vanilla neural networks and convolutional neural networks (what we have studied so far) accept a fixed-size vector as input (such as an image) and produce a fixed-size vector as output (such as probabilities of different classes). Recurrent nets allow us to operate over sequences of vectors: sequences in the input, the output, or both. This makes them great for applications such as text translation, text generation, image captioning, and sentiment analysis. Andrej Karpathy’s post The Unreasonable Effectiveness of Recurrent Neural Networks is a great resource to learn more about how they work.
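The recurrence itself is small: at each time step the hidden state mixes the current input with the previous hidden state, reusing the same weights. Here is a minimal numpy sketch of a vanilla RNN cell (shapes and weight scales are illustrative, not from any course code):

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

# Illustrative randomly initialized weights
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One recurrence: h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b)."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)
sequence = rng.normal(size=(5, input_size))  # 5 time steps of 4-dim vectors
for x_t in sequence:
    h = rnn_step(x_t, h)  # the same weights are applied at every step

print(h.shape)  # (8,)
```

Because the loop can run for any number of steps, the same cell handles sequences of any length, which is what the fixed-size-input networks above cannot do.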
Examples of RNN in action:
More:
- Four Experiments in Handwriting with a Neural Network
- Teaching Machines to Draw
- Sketch-RNN: A Generative Model for Vector Drawings
- Google quick draw dataset
- Memorization in RNNs
- Recurrent Net Dreams Up Fake Chinese Characters in Vector Format with TensorFlow
4/15 - Class 7
Class notes:
t-SNE preserves the neighbor relationships between points, hence the full name “t-distributed Stochastic Neighbor Embedding”. It can reduce a high-dimensional dataset to any smaller number of dimensions, but it is primarily used for reductions to 2D and 3D for visualization.
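A typical 2D reduction looks like the following scikit-learn sketch (the synthetic clustered data and parameter values are illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 100 points in 50 dimensions, forming two loose clusters
data = np.concatenate([
    rng.normal(0.0, 1.0, size=(50, 50)),
    rng.normal(5.0, 1.0, size=(50, 50)),
])

# Reduce to 2 dimensions for plotting; perplexity roughly controls
# how many neighbors each point tries to stay close to
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(data)
print(embedding.shape)  # (100, 2)
```

The resulting `embedding` can be scattered directly; nearby high-dimensional points land near each other in the plot.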
Examples of t-SNE in action:
- MNIST 2D
- MNIST 3D
- Imagenet
- The Curator Table
- IDEO Font Map
- Embedding Projector - many options
- Infinite Drum Machine
- Bird Sounds
More on t-SNE:
- Distill.pub interactive guide to t-SNE
- t-SNE.js
- Kyle McDonald’s Audio Notebooks - great for feature extraction
- ml4a - Audio t-SNE
- ml4a - Wikipedia t-SNE
- ml4a - Image t-SNE
- Rasterfairy - turn your cloud into a grid
- Documentation for scikit-learn t-SNE
4/8 - Class 6
Midterm presentations
4/1 - Class 5
A guest presentation from Rebecca Ricks, a technologist, writer, and artist thinking about privacy and computational systems. Rebecca was a Ford-Mozilla Open Web Fellow at Human Rights Watch, an organization that investigates and reports on abuses happening in all corners of the world, and is now a researcher at the Mozilla Foundation. Her work interrogates the ways social platforms collect and monetize data about their users. Rebecca received her master’s at NYU’s ITP program and holds a B.A. in Middle East Studies/Arabic.
3/18 - Class 4
Class notes:
Pose estimation is a general problem in computer vision where we detect the position and orientation of an object. This usually means detecting the keypoint locations that describe the object. (Source)
Some examples:
Pose estimation can be done for single or multiple people.
We can track face position, body position, and even use KNN classification to classify body positions - code. More reading:
- Real-time Human Pose Estimation in the Browser with TensorFlow.js
- Integrating Ml5.js PoseNet model with Three.js via Advait
- Introducing BodyPix: Real-time Person Segmentation in the Browser with TensorFlow.js
3/11 - Class 3
Class notes:
Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned. (Source)
What is feature extraction?
- In a multi-layer neural network, the final layer is known as the softmax layer
- It takes a list of numbers from the network and “squashes” them into probabilities
- The numbers fed into this final layer are known as the logits, or feature vector
- This is the “fingerprint” of the image, and can be used to compare images
- Good explanation here
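The “squashing” is a one-liner: exponentiate the logits, then normalize so they sum to 1. A minimal sketch:

```python
import numpy as np

def softmax(logits):
    """Turn raw logits into probabilities that sum to 1."""
    shifted = logits - np.max(logits)  # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.1])   # the "feature vector" for one input
probs = softmax(logits)
print(round(probs.sum(), 6))  # 1.0
```

Note that softmax preserves the ordering of the logits, so the largest logit always becomes the most probable class.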
Retraining a network
- If we swap out the final layer, we can retrain it to classify what we want
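Concretely, retraining only the final layer means treating the penultimate-layer feature vectors as fixed inputs and fitting a small new classifier on them. In this sketch the feature vectors are random stand-ins for real network activations (a real pipeline would extract them from a pretrained model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(40, 128))   # stand-in for 40 images' feature vectors
labels = np.array([0] * 20 + [1] * 20)  # two new classes we want to distinguish
features[labels == 1] += 1.0            # shift one class so it is separable

# The "new final layer": a simple classifier trained on frozen features
new_head = LogisticRegression().fit(features, labels)
print(new_head.score(features, labels))
```

This is much cheaper than training the whole network, since only the small head is fit; the rest of the network's weights stay frozen.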
Linear regression
- One of the simplest forms of machine learning
- Also known as “line fitting”
- Can be linear or polynomial
- Continuous output, as opposed to classification which is discrete
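Line fitting in one call: `numpy.polyfit` with degree 1 is linear regression, and higher degrees give polynomial regression. A minimal sketch with points that lie exactly on a line:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0  # points on the line y = 2x + 1

# Fit y = slope * x + intercept (deg=1 -> linear; deg=2 would be quadratic)
slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 3), round(intercept, 3))  # 2.0 1.0
```

The recovered coefficients are the continuous output: any new `x` maps to a predicted number rather than a discrete class.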
KNN
- “Tell me who your neighbor is, and I’ll tell you who you are”
- “K-Nearest Neighbor” is a machine learning algorithm used for both classification and regression. It is a “lazy learning” algorithm because there really is no learning at all. New data points are classified / valued according to a distance comparison with every data point in a training set. [source](https://github.com/nature-of-code/NOC-S17-2-Intelligence-Learning/blob/master/week3-classification-regression/README.md)
- Nice interactive demo here a bit further down the page
Examples using transfer learning:
- Teachable Machine
- Pacman code here
- Objectifier
- Getting Alexa to Respond to Sign Language Using Your Webcam and Tensorflowjs
Additional material:
- Nature of Code Part 2 “Intelligence and Learning” - week3-classification-regression - tons of good resources here
- ml4a - Classification with KNN
- ml4a - Linear Regression
- ml4a - Transfer Learning
2/25 - Class 2
Class notes:
- What is deep learning?
- What are neural networks?
- A brief history of neural networks
- Datasets
- Pre-trained models
- Getting started with ml5
- For now we can use the p5.js web editor - make an account to save your projects
- Github with code from the class
Additional material:
- A Brief History of Neural Networks
- Nature of Code Chapter 10: Neural Networks
- A Quick Introduction to Neural Networks
- Let’s Code a Neural Network from Scratch
- Nature of Code Class Week 4 - tons of more resources here
- ml5 image classifier using the p5js web editor
- Setting up a local server with Python
2/18 - Class 1
Class notes:
- What is machine learning?
- Artificial intelligence vs machine learning vs deep learning
- Community
- Tools
- Types of learning
- Supervised vs Unsupervised learning
- Reinforcement learning (see OpenAI gym)
- Classification vs regression
Use cases for Machine Learning
- Image classification
- Transfer learning
- Image regression
- Localization and detection
- Pose detection
- Style transfer
- Image translation
- Generative adversarial networks (GANs)
- Image to text
- Text Generation
- Dimensionality reduction
Additional relevant resources: