Face ID is a facial-recognition technology that Apple introduced in 2017 with the iPhone X and has shipped on iPhones ever since. Apple built it to replace the fingerprint sensor and to offer an alternative to unlocking the phone with a passcode. In some ways it works like the eye's visual processing system: it takes in an image and recognizes a face.

Face ID relies on the "TrueDepth camera," a cluster of sensors and special projectors that allows your phone to unlock. The TrueDepth system has the FaceTime camera built in, which takes a picture of the face, but what makes it unique is its infrared hardware. An infrared emitter produces light with wavelengths on the order of 1,000 nanometers, just beyond what the human eye can see; the same kind of emitter is used in remote controls to let you control what is happening on your device or TV. In Face ID, an infrared dot projector casts over 30,000 invisible dots onto the face to map its shape and structure. The dots read the pattern of a face because the light reflects off surfaces at different depths. A separate infrared component, the flood illuminator, shines even infrared light at your face to allow recognition; this is what lets your face be recognized even if you are wearing glasses, a hat, or are using Face ID in the dark.

Another side fact I noted: the face categorization in the Photos app is separate from this hardware. Even iPhones older than the X categorize people's faces in Photos, which is done in software by analyzing the pictures themselves rather than by the IR camera.

The question I have in my research about the Face ID technology is: where are the formations of these dots stored, and how do they come together? In some ways I wonder if this process could be more accurate in the future if there is an area of storage in the iPhone where the collections of dots are stored.
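The way the projected dots reveal depth can be sketched with the standard structured-light triangulation idea: a projector and an infrared camera sit a known distance apart, and each dot shifts sideways in the camera image by an amount (the "disparity") that depends on how far away the surface is. This is only a toy illustration of the principle, not Apple's implementation; the baseline and focal-length numbers below are made up.

```python
# Toy structured-light depth sketch (assumed numbers, not Apple's hardware).
BASELINE_MM = 25.0   # hypothetical projector-to-camera separation
FOCAL_PX = 600.0     # hypothetical camera focal length, in pixels

def depth_mm(disparity_px: float) -> float:
    """Estimate the distance to a surface point from one dot's disparity,
    using the triangulation relation z = f * b / d."""
    return FOCAL_PX * BASELINE_MM / disparity_px

# Dots reflecting off nearer parts of the face (the tip of the nose) shift
# more than dots reflecting off farther parts (the cheeks), so a larger
# disparity means a smaller depth.
print(depth_mm(50.0))  # 300.0 mm -- a closer point
print(depth_mm(30.0))  # 500.0 mm -- a farther point
```

Repeating this for all 30,000 dots is what turns a flat dot pattern into a 3D map of the face's contours.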
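As for how the dot formations "come together," one way to picture it is that the depth map gets condensed into a mathematical representation of the face, and each new scan is compared against the stored one. The sketch below models both as plain lists of numbers and uses cosine similarity as a stand-in metric; the vectors, the noise model, and the 0.9 threshold are all invented for illustration and bear no relation to Apple's actual (proprietary) matching.

```python
import math
import random

random.seed(0)  # deterministic toy data

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches(template, scan, threshold=0.9):
    """Return True if a new scan is close enough to the enrolled template."""
    return cosine(template, scan) >= threshold

# Hypothetical enrolled face template, e.g. derived from the dot depth map.
enrolled = [random.gauss(0, 1) for _ in range(128)]
# The same face scanned again: the template plus a little sensor noise.
same_face = [x + random.gauss(0, 0.05) for x in enrolled]
# A different face: an unrelated vector.
other_face = [random.gauss(0, 1) for _ in range(128)]

print(matches(enrolled, same_face))   # True
print(matches(enrolled, other_face))  # False
```

The appeal of storing a compact representation like this, rather than raw dot images, is that unlocking only requires one quick vector comparison.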