Oh, so the gentleman can't see it there? Interesting. I gave you the page number and even the exact spot on that page:
"Maudit 7.11.2018 at 22:11"
„Finally, Figure 24 illustrates one of the challenging cases for multi frame image enhancement. In this case, a semi-transparent screen floats in front of a background that is moving differently. TAA tends to blindly follow the motion vectors of the moving object, blurring the detail on the screen. DLSS is able to recognize that changes in the scene are more complex and *combines the inputs in a more intelligent way that avoids the blurring issue*.“
https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf (page 35, bottom)
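And just so it's clear what "TAA tends to blindly follow the motion vectors" actually means in practice, here is a minimal sketch of textbook TAA-style temporal accumulation in Python. This is the generic technique, not NVIDIA's code; the function, the names and the blend weight are purely my own illustration. The history buffer is reprojected along the motion vector and blended in with a fixed weight, so anything that does not actually move with that vector - like the semi-transparent screen from Figure 24 - gets averaged with mismatched history and smears:

# Minimal sketch of classic TAA-style temporal accumulation (illustrative only,
# not NVIDIA's implementation; all names and parameters here are assumptions).
import numpy as np

def taa_accumulate(current, history, motion_vectors, alpha=0.1):
    """Blend the current frame with the reprojected history buffer.

    current:        (H, W) current frame, grayscale for simplicity
    history:        (H, W) accumulated result from previous frames
    motion_vectors: (H, W, 2) per-pixel (dy, dx) offsets into the previous frame
    alpha:          weight of the current frame; the rest comes from history
    """
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: look up where each pixel "came from" in the previous frame.
    src_y = np.clip(ys + motion_vectors[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + motion_vectors[..., 1], 0, w - 1).astype(int)
    reprojected = history[src_y, src_x]

    # The blur problem: this blend trusts the motion vectors unconditionally.
    # Content that does not move with the vectors (a semi-transparent overlay,
    # for example) is averaged with mismatched history and loses detail.
    return alpha * current + (1.0 - alpha) * reprojected

The whitepaper's point is exactly that DLSS replaces this fixed, vector-trusting blend with a learned combination of the inputs.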
And that's without the hyphen between "multi" and "frame", so the search finds it just fine. Are you playing dumb? See no evil, hear no evil? :D
And if that weren't enough:
"DLSS leverages a deep neural network to extract multidimensional features of the rendered scene and intelligently combine details from multiple frames to construct a high-quality final image."
https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/
It's really funny that the one part that completely negates your whole "construction" is exactly the part you somehow mysteriously failed to find :D
And another thing: since when does the absence of something prove the opposite? Are you trying logical fallacies again, along the lines of "since God can't be proven not to exist, he must clearly exist"? That's a non sequitur. A dirty little trick like that from our crystal-clean Oslan? What makes it worse for you is that Nvidia's de facto top representative does talk about object recognition. And there is nothing strange about the DNN not being described in more detail in the whitepaper. It would very likely fill an entire whitepaper of its own, and on top of that it's a competitive advantage. Not even its approximate configuration is in there.
The information about detail recognition is out there, though, both in the Nvidia CEO's keynote and in other presentations. DLSS is a convolutional autoencoder. See for example here: https://youtu.be/QLMDX56-GSU?t=338
In image processing, autoencoders learn to recognize objects in the source.
"Some architectures use stacked sparse autoencoder layers for image recognition. The first autoencoder might learn to encode easy features like corners, the second to analyze the first layer's output and then encode less local features like the tip of a nose, the third might encode a whole nose, etc., until the final autoencoder encodes the whole image into a code that matches (for example) the concept of 'cat'."
https://en.wikipedia.org/wiki/Autoencoder
Literally: the first autoencoder layer learns to recognize an edge, the next one the tip of a nose, the next one a whole nose... until at the end it knows there is a cat in the image. (A sketch of what such a convolutional autoencoder looks like in code follows below.)
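And so we're not arguing about words, a convolutional autoencoder is nothing mysterious. Here is a minimal sketch in Python/PyTorch - purely my illustration, since the real DLSS network's layer sizes and configuration are not public (as noted above, they aren't even in the whitepaper). The encoder keeps halving the resolution, so early layers only "see" small neighbourhoods (edges) while deeper layers see large ones (whole objects), and the decoder upsamples back to a full image:

# Minimal sketch of a convolutional autoencoder (illustrative only; the real
# DLSS architecture is not public, so every layer size here is an assumption).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        # Encoder: each stride-2 conv halves the resolution and widens the
        # receptive field - early layers respond to edges, deeper layers to
        # progressively larger structures.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder: transposed convs upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: one 1080p RGB frame in, one reconstructed RGB frame out.
model = ConvAutoencoder(in_channels=3)
frame = torch.randn(1, 3, 1080, 1920)
out = model(frame)  # shape: (1, 3, 1080, 1920)

That hierarchy of progressively larger features is exactly what the Wikipedia paragraph above describes.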
So, to sum up:
I've provided links to statements from several people directly at Nvidia and a link to their whitepaper, both of which confirm that the DNN is temporal. Besides, how else do you want to track "changes in the scene" - "DLSS is able to recognize that changes in the scene are more complex" - without knowing the previous state? ;) I've also posted a link with a clear explanation that an autoencoder identifies details when recognizing an image, and I linked an explanation from TechPowerUp of the same thing, which clearly contradicts what you claim.
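To make the "previous state" point concrete, this is all it takes to give such a network temporal context - again just my illustration of the principle, NVIDIA has not published the actual input layout. You stack the current frame, the previous output and the motion vectors into one multi-channel input tensor; without the previous output in there, the network has literally nothing to compare the current frame against:

# Illustrative only: one possible way to hand temporal context to a network
# like the autoencoder sketch above (the real DLSS input layout is not public).
import torch

current_lr   = torch.randn(1, 3, 1080, 1920)  # current low-resolution frame
previous_out = torch.randn(1, 3, 1080, 1920)  # previous output, reprojected to this frame
motion_vecs  = torch.randn(1, 2, 1080, 1920)  # per-pixel motion vectors

# 3 + 3 + 2 = 8 input channels; drop previous_out and there is no temporal
# information left for the network to detect "changes in the scene" with.
temporal_input = torch.cat([current_lr, previous_out, motion_vecs], dim=1)
print(temporal_input.shape)  # torch.Size([1, 8, 1080, 1920])

An autoencoder like the one sketched earlier would then simply be built with in_channels=8 instead of 3.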
You, on the other hand, have delivered nothing so far. I'm still waiting for anything in writing that says DLSS is a spatial filter, and for an explicit statement that Huang's keynote claim about a temporal DNN is false.
Journalistic ethics seem to have gone out the window. Are you going to keep talking your way around your total journalistic failure here? Your lectures about evasion, dodging and alternative facts apparently only apply to others, never to you, right? Double standard, nice...