Research into face detection is ongoing, and much remains to be done to make it more effective and efficient. Now researchers from Yahoo Labs and Stanford University have developed an algorithm that can identify faces from many different angles, even when the face is partially occluded or upside down. The new algorithm, named the Deep Dense Face Detector, was presented on Tuesday by the two researchers, Sachin Farfade and Mohammad Saberian.

Most existing detection systems are built on the Viola-Jones algorithm, which spots front-facing faces in images by picking out key facial features such as a vertical nose line and the shadows around the eyes. At present, such systems can detect only fairly straight-on faces; they struggle when a face is obscured or turned in another direction, let alone upside down. To address this, Farfade and his team used a form of machine learning known as a deep convolutional neural network, which trains a computer to recognise elements of images from a database using multiple layers of processing. Google used a similar technique for its recent GoogLeNet classification algorithm, which can identify images within images, such as a dog wearing a cap while sitting on a bench.

Farfade trained the algorithm on a database of 200,000 images featuring faces at various angles and orientations, plus 20 million images that did not contain faces. The researchers note that face detection is attracting growing interest, and that there is still much to do to make it simpler and better; they added that the technology could be improved with further training. This is good news for cloud providers and for social networks that trade in images, such as Facebook, Instagram and Imgur.
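The core detection idea can be sketched as a sliding window: a scoring function is evaluated over every patch of an image, and patches that score above a threshold are reported as detections. The sketch below is a toy illustration under that assumption, not the authors' implementation; `face_score` is a hypothetical stand-in for a trained convolutional network.

```python
import numpy as np

def face_score(window):
    """Stand-in for a trained classifier, returning a confidence in [0, 1].
    A real deep convolutional network would be applied here; this toy
    version simply scores how bright the centre of the window is."""
    h, w = window.shape
    centre = window[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return float(centre.mean())

def detect(image, win=8, stride=4, threshold=0.8):
    """Slide a fixed-size window over the image and keep every window
    whose classifier score exceeds the threshold."""
    hits = []
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            score = face_score(image[y:y + win, x:x + win])
            if score > threshold:
                hits.append((x, y, score))
    return hits

# Toy image: mostly dark, with one bright 8x8 patch at (12, 12).
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
print(detect(img))
```

In practice the window is scanned at several scales as well, which is what lets a single classifier find both large and small faces in the same image.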
Facebook's DeepFace tool also uses a neural network to help recognise users in photos. Its algorithm identifies faces 'as accurately as a human' and uses a 3D model to virtually rotate faces so that they face the camera. The team trained a neural network on a database of faces, then tried to match each face against a test database of more than 4 million images containing more than 4,000 separate identities, each one labelled by humans. The DeepFace feature has now started appearing in the privacy settings of accounts during tagging, but it is not yet available to everyone and is still in its infancy.
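The matching step described above can be sketched as a nearest-neighbour search over face embeddings: the network reduces each face to a feature vector, and a query face is matched to the identity whose vector is most similar. The names and random vectors below are hypothetical stand-ins for the output of a trained network, not Facebook's actual system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query, gallery):
    """Return the identity in the gallery whose embedding is most
    similar to the query embedding."""
    return max(gallery, key=lambda name: cosine_similarity(query, gallery[name]))

# Toy gallery of labelled identities; in practice each vector would be
# produced by the trained network from a labelled photo.
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

# A query that is a slightly noisy copy of "bob"'s embedding should match "bob".
query = gallery["bob"] + rng.normal(scale=0.1, size=128)
print(match(query, gallery))
```

Cosine similarity is a common choice here because it ignores the overall magnitude of the vectors and compares only their direction, so two embeddings of the same person remain close even if one photo is brighter or lower quality.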