As I wrote here a while back, AI isn't just about neural networks; it goes hand in hand with image preprocessing, simulations over the inputs and outputs of neural networks, navigating the recognized environment, and so on. So the Tesla is not only faster, but its classic compute capabilities, used in exactly those scenarios, are very useful on top. That's something the Google TPU simply lacks.
I hope the TPU keeps a much lower price than the competition in this generation too.
https://www.extremetech.com/wp-content/uploads/2018/04/ResNet50-Cost.png
"As shown above, the current pricing of the Cloud TPU allows to train a model to 75.7 percent on ImageNet from scratch for $55 in less than 9 hours! Training to convergence at 76.4 percent costs $73. While the V100s perform similarly fast, the higher price and slower convergence of the implementation results in a considerably higher cost-to-solution."
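The "cost-to-solution" figure the quote is built on is just simple arithmetic: hourly instance price times wall-clock hours to the target accuracy. A minimal sketch, with hourly rates and training times that are purely illustrative assumptions (not the actual 2018 Cloud TPU or V100 prices):

```python
# Cost-to-solution = hourly price * hours to reach the target accuracy.
# All rates and durations below are made-up illustrative numbers,
# NOT real cloud prices or benchmark results.

def cost_to_solution(hourly_rate_usd: float, hours_to_target: float) -> float:
    """Total cloud bill for one training run to the target accuracy."""
    return hourly_rate_usd * hours_to_target

# Hypothetical accelerator A: cheaper per hour, converges in ~9 hours.
a_cost = cost_to_solution(hourly_rate_usd=6.50, hours_to_target=8.9)
# Hypothetical accelerator B: similar speed, but a higher hourly rate
# and slightly slower convergence of the implementation.
b_cost = cost_to_solution(hourly_rate_usd=12.00, hours_to_target=9.5)

print(f"A: ${a_cost:.2f}")  # $57.85
print(f"B: ${b_cost:.2f}")  # $114.00
```

This also shows why the comparison is so sensitive: a slower implementation inflates `hours_to_target`, and the whole cost ratio shifts with it.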
That's a good one: "slower convergence of the implementation". So not only is this a comparison across different clouds, on different hardware, but on top of that the V100 ran a slower implementation? Comparisons like that are genuinely amusing. Especially when Google has to cut prices sharply because, next to Azure and AWS, almost nobody uses it.