How can a computer understand what is happening in a video?
The Royal Society

Wed 22/11/2017 from 6.30pm to 7.30pm

Timezone: London (GMT+00:00)

The Royal Society
6-9 Carlton House Terrace
London SW1Y 5AG
United Kingdom
2017 Milner Award Lecture by Professor Andrew Zisserman FRS.
How can a computer understand what is happening in a video? How can a computer recognise people and what they are doing and saying in a video stream? The answer is by learning, and learning can take many different forms.
One form is known as 'strong supervision': this is when a computer is shown many (often thousands of) examples of a person or the action they are doing, and from these it learns a model to classify the video. Another form of learning is known as 'weak' or 'self-supervision': this is when the computer learns directly from the structure of a video stream, without hand-annotated labels.
Join us to discover how both forms of supervision can be used to train neural networks using deep learning. The lecture will be illustrated throughout with examples, including: recognising people by their faces, recognising human actions, automated lip reading, and using sound and images in concert for training.
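The contrast between the two forms of supervision can be sketched in code. The toy example below (not from the lecture; all data and names are hypothetical) trains a simple classifier on annotator-provided labels, then shows how a self-supervised 'pretext' label can instead be derived for free from the temporal order of frames in a stream:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Strong supervision: many labelled examples -> train a classifier ---
# Hypothetical toy data: 2-D "frame features" with human-provided labels.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels supplied by annotators

w = np.zeros(2)
for _ in range(500):  # minimal logistic-regression training loop
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

acc = ((X @ w > 0) == y).mean()  # high accuracy on this separable toy set

# --- Self-supervision: labels come from the structure of the stream ---
# Pretext task: given two frames, predict which came first in time.
video = np.cumsum(rng.normal(size=(100, 2)), axis=0)  # a drifting "video"
i, j = 10, 60
pair = np.concatenate([video[i], video[j]])  # input to a would-be network
label = float(i < j)  # label read off the stream itself, no annotator needed
```

In the second half, no human ever labels anything: the ordering of frames supplies the training signal, which is the sense in which the computer "learns directly from the structure of a video stream".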
Attending this event
• Free to attend
• No registration required
• Seats allocated on a first-come, first-served basis
• Doors open at 6pm
• Travel and accessibility information can be found on our website
• This event may be popular, and entry cannot be guaranteed
• The event features live subtitling
• The ceremony will be webcast live
For more information, please follow the link: