Caesar III
Oh Noes
Yes, exactly. For the weaker of the two consoles, 85% of maximum performance isn't bad. Especially considering the devices have supposedly been pushed to their limits several times already.
Upscaling will probably become even more of a standard.
http://www.popsci.com/gadgets/article/2010-01/exclusive-inside-microsofts-project-natal
But it's the software inside, which Microsoft casually refers to as "the brain," that makes sense of the images captured by the camera. It's been programmed to analyze images, look for a basic human form, and identify about 30 essential parts, such as your head, torso, hips, knees, elbows, and thighs.
In programming this brain (a process that's still going on), Microsoft relies on an advancing field of artificial intelligence called machine learning.
The process is a lot like a parent pointing to many different people's hands and saying "hand," until a baby gradually figures out what hands look like, how they can move, and that, for instance, they don't vanish into thin air when they're momentarily out of sight.
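The "pointing at hands" analogy maps onto ordinary supervised learning: show the system many labeled examples until it can classify new ones. The toy sketch below is purely illustrative and has nothing to do with Natal's actual software; the feature vectors and labels are invented, and a simple nearest-centroid rule stands in for whatever Microsoft really trained.

```python
# Toy supervised classifier: learn "hand" vs. "not hand" from labeled
# feature vectors (hypothetical features, e.g. blob aspect ratio and
# relative size), then classify an unseen example by nearest centroid.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(labeled):
    # labeled: {label: [feature_vectors]} -> one centroid per label
    return {label: centroid(vecs) for label, vecs in labeled.items()}

def classify(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

examples = {
    "hand":     [(1.1, 0.20), (0.9, 0.25), (1.0, 0.18)],
    "not_hand": [(3.0, 0.80), (2.7, 0.90), (3.2, 0.75)],
}
model = train(examples)
print(classify(model, (1.05, 0.22)))  # a hand-like feature vector
```

The more labeled examples you feed `train`, the better the centroids represent each class, which is the "parent pointing at hands over and over" part of the analogy.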
Sounds good so far. But as I said above, I wonder where this information is supposed to be stored? Especially on the Arcades.
http://www.neogaf.com/forum/showpost.php?p=19197285&postcount=105
I'm a computer hardware engineer and actually design hardware to test image quality and performance characterization of CMOS image sensors. PCB layout and FPGA design/programming.
It's my understanding, having discussed Natal with a few engineers including some Stanford researchers, that Natal's big design "thing" was a proprietary processing chip that would do the heavy lifting of interpreting structured IR light arrays. I'll briefly explain.
Instead of just interpreting static light in a room and the shapes deciphered within it using (now almost standard) vision processing algorithms, structured IR patterns are emitted from near the camera, and how this pattern falls on a nearby surface is interpreted.
Ever seen an IR pattern emitted by a newer digital camera for AF? It's like this, only more advanced, and there are different patterns emitted. How the patterns are captured back by the CMOS image sensor (with an IR bandpass filter behind the lens) is used to decipher the motion that the IR pattern is hitting.
This may be WAY off base. Just a discussion I had with some geeks. Perhaps Natal is a mixture of interpreting one structured IR pattern and the actual, visible light spectrum video together with some incredible algorithms.
Now, Microsoft didn't invent anything; they bought an Israeli company that was working on a dedicated chip to interpret the patterns, so all the data wouldn't have to be shuffled over SHITTY USB 2.0 and bog down another processor. Instead, this chip was going to decipher the data into vectors, mass flow calculations, etc. and just transmit that.
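A rough back-of-the-envelope calculation shows why shipping derived vectors instead of raw frames matters over a constrained bus. All figures here are assumptions for illustration (16-bit depth frames, ~30 tracked body parts), not known Natal specifications.

```python
# Bandwidth comparison (all figures assumed): raw depth frames over USB
# vs. only derived data, e.g. joint positions, as a dedicated chip
# could emit after doing the heavy lifting on-device.

W, H, BYTES_PER_PX, FPS = 640, 480, 2, 30   # hypothetical 16-bit depth stream
raw_bps = W * H * BYTES_PER_PX * FPS        # bytes/sec for raw frames

JOINTS, FLOATS_PER_JOINT = 30, 3            # ~30 body parts, xyz as 4-byte floats
skeleton_bps = JOINTS * FLOATS_PER_JOINT * 4 * FPS

print(raw_bps // 1_000_000, "MB/s for raw depth frames")  # roughly 18 MB/s
print(skeleton_bps, "B/s for skeleton data")              # roughly 10 KB/s
```

Raw frames alone approach the practical throughput of USB 2.0 high-speed, while a skeleton stream is three orders of magnitude smaller, which is exactly the trade-off a dedicated on-device processing chip would exploit.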
The rumors I've heard from Israel are that the top talent of the company left after the purchase. Also, structured IR pattern recognition research is on-going at several Universities, including Stanford and I believe UIUC. Not sure if anything could be patented, except for UI bullshit crap that is entirely obvious once the cameras and technical hiccups are solved.
USB 2.0 is horrible for this on a PC because there is no DMA mechanism to efficiently get the data to the system (the CPU oversees everything, no kernel-mode drivers, inefficient hardware path, etc.). It may be more efficient in the 360's hardware/firmware, but I'm not optimistic.
Anyway, as a repeat disclaimer, most of this is rumor or from a discussion or two I had with researchers, I have no direct knowledge of anything, and have NOT FOLLOWED any gaming world discussions of Natal. I am in a bubble, but some of this might be food for thought.
So my concerns are shared by the "specialists."
Last edited by a moderator: