What technologies and components do you use for it, and why did you choose them?
We rely on computer vision cameras as the base input for the system. From there we use standard and proprietary image processing algorithms to create a high-quality representation of reality.
This choice lets us control image quality by configuring an elaborate image processing pipeline. Additionally, it is more cost-effective than using "full-featured" broadcast cameras.
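The interview does not detail the pipeline itself, but the idea of chaining configurable processing stages can be illustrated with a minimal sketch. Everything below is hypothetical (the stage names, the toy 1-D "frame", and the simple box blur are stand-ins, not the company's actual algorithms); it only shows the general pattern of composing stages into one configurable pipeline.

```python
def denoise(pixels, strength=1):
    # Toy 1-D box blur standing in for a real denoising stage.
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - strength):i + strength + 1]
        out.append(sum(window) // len(window))
    return out

def color_correct(pixels, gain=1.1):
    # Scale values by a gain factor and clamp to the 8-bit range.
    return [min(255, int(p * gain)) for p in pixels]

def build_pipeline(*stages):
    # Compose stages left to right into a single callable,
    # so the pipeline can be reconfigured by reordering stages.
    def run(pixels):
        for stage in stages:
            pixels = stage(pixels)
        return pixels
    return run

pipeline = build_pipeline(denoise, color_correct)
frame = [10, 200, 12, 11, 250, 13]   # a tiny stand-in for camera data
print(pipeline(frame))
```

A real system would operate on 2-D sensor frames with far more sophisticated stages, but the compositional structure (each stage a tunable transform, the whole chain configurable per camera) is the point being made in the answer above.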
What other applications can you see this technology being used for?
The ultimate application of the technology is to replace what we know as the photographic and moving image medium. Think Star Trek's holodeck, or Minority Report's holo-memories.
We believe that by enabling photography to move from two-dimensional to true three-dimensional representation, we will open up whole new forms of media experiences:
- Having dinner with your family, while being across the ocean
- Watching a sports event with your friends without being there
- Attending a yoga lesson with a yoga master, where the participants join from different parts of the globe
Just as it is hard to recall how difficult it was 20 years ago to share still and moving images with each other, compared with today's high-speed digital world; once the moving image becomes a true 3D representation, the next level of experiences may be enabled.
Do you have any other new exciting products or developments on the horizon?
The next step in our development pipeline is to allow home viewers themselves to move around freely within the captured scene. What is now reserved for an event's producers and directors will become the domain of every sports fan: fully interactive viewing around the scene.
View a press release on the system's premiere on Sunday Night Football.