GTC 2018 KEYNOTE
NVIDIA CEO Jensen Huang live
on March 27, 9:00 a.m. PT
https://www.nvidia.com/en-us/gtc/
Join Us at GTC 2018, the Premier AI Conference
"NVIDIA’s GPU Technology Conference (GTC) is the premier AI and deep learning event, providing you with training, insights, and direct access to experts from NVIDIA and other leading organizations.
See the latest breakthroughs in self-driving cars, smart cities, healthcare, big data, high performance computing, virtual reality and more.
The crowd of more than 8,000 surging into the McEnery Convention Center — which includes researchers, press, technologists, analysts and partners from all over the globe — is our largest yet."
The 600+ talks on the docket may be the best testament to the spread of GPUs into every aspect of human endeavor.
-
Rumors flying as GTC 2018 approaches...
NVIDIA's next-gen: GTX 11 series, not the GTX 20 series
Exclusive: NVIDIA won't name their next-gen cards as GTX 20 series, will instead be the GTX 11 series
By: Anthony Garreffa | Video Cards News | Posted: 8 hours, 36 mins ago
https://www.tweaktown.com/news/61326/nvidias-next-gen-gtx-11-series-20/index.html
"I've only been on the ground in San Jose for less than 24 hours, but I've had some interesting conversations with people in very high places that have confirmed with TweakTown exclusively that NVIDIA's next-gen graphics cards will NOT be launched as the GTX 20 series.
This means no GeForce GTX 2080 like we, and everyone else, have been reporting, but instead the GeForce GTX 11 series. My source was quick to make me ponder, but couldn't answer the question in full: they said "it [the new cards] won't necessarily end in '70' and '80' like we're used to". That's all they would say on the matter.
NVIDIA could launch new graphics cards with names like GeForce GTX 1185 and GTX 1175 instead, differentiating them against the GTX 1070 and GTX 1080. I could see this happening, as it would be an interesting move for sure, and I really like the idea of a GTX 1185 for example.
It was only two months ago that I said NVIDIA would be launching their new GeForce GTX series at GTC 2018, where I said: "We could see the GTX 1180 and GTX 1170, which wouldn't surprise me".
We should hear more about NVIDIA's next-gen GPU architecture in just 24 hours, when NVIDIA CEO Jen-Hsun Huang himself kicks off the GTC 2018 keynote."
Nvidia Turing reportedly to be named ‘GTX 11 Series’ – not GTX 20
What could this mean for the scope of Nvidia graphics cards?
By Joe Osborne 6 hours ago Graphics cards
https://www.techradar.com/news/nvidia-turing-reportedly-to-be-named-gtx-11-series-not-gtx-20
"Contrary to prior rumors and reports, Nvidia may give us a taste of its next line of gaming graphics cards at its GPU Technology Conference (GTC). TweakTown reports, citing an anonymous source, that the company will officially name its next set of graphics cards the ‘GTX 11 Series.’
Widely expected to be based on the in-progress Nvidia Turing hardware architecture, this naming convention would be a departure from what the technology world had anticipated the company would go with.
Reports and rumors leading up to this had pegged the next generation of Nvidia graphics cards to be known as the GTX 20 series.
Leaving room for further specialization?
Furthermore, TweakTown’s source reportedly said of the new product line that "it [the new cards] won't necessarily end in '70' and '80' like we're used to.”
This could see Nvidia applying different numerals to its product names from what it currently does, which starts at 50 (e.g. Nvidia GeForce GTX 1050) up through 80. TweakTown speculates that Nvidia could denote its products in increments of five (i.e. GTX 1185).
However, and this is speculation of our own, it’s possible that Nvidia simply wishes not to constrict itself within its own naming convention, leaving room for products closer in numeral so as to be geared toward specific, niche use cases – or to broaden the GTX name’s scope.
For instance, the successor to the Nvidia MX150 could be the Nvidia GeForce GTX 1140 and so forth, allowing Nvidia to introduce the GTX name to even lower-spec products.
At any rate, this report should only fuel the hype surrounding the next line of Nvidia graphics cards and GTC 2018, which kicks off on March 27."
RUMOR - NVIDIA's next-gen: GTX 11 series, not the GTX 20 series
https://www.reddit.com/r/nvidia/comments/87ayve/nvidias_nextgen_gtx_11_series_not_the_gtx_20/ -
NVIDIA lists all GTX 10 series cards as 'out of stock'
By: Anthony Garreffa | More News: Video Cards | Posted: 4 hours, 59 mins ago
https://www.tweaktown.com/news/61336/nvidia-lists-gtx-10-series-cards-out-stock/index.html
"Remember when I exclusively reported that NVIDIA was set to launch a new GeForce GTX graphics card during GTC 2018? Well, if you wanted confirmation of this fact, it is here: NVIDIA has just placed 'out of stock' on the entire range of GTX 10 series graphics cards, as well as the TITAN Xp.
As you can see, 'out of stock' dominates the official NVIDIA website, which means they will not be selling any more GTX 10 series cards effective immediately. The only reasons behind this would be that NVIDIA themselves are completely sold out of GP10x GPUs and can't fulfil even their own orders for Founders Edition cards, or that a next-gen release is imminent.
I also reported another exclusive earlier today that NVIDIA would release their next-gen GeForce GTX product as the GTX 11 series, and not the GTX 20 series that the world has expected. This news, along with the GTX 11 series naming and the GTC 2018 keynote that Jensen Huang kicks off tomorrow morning, has every fiber in my nerd body freaking out."
-
https://www.ustream.tv/gpu-technology-conference
"GTC is the largest and most important event of the year for GPU developers.
GTC and the global GTC event series offer valuable training and a showcase of the most vital work in the computing industry today - including artificial intelligence and deep learning, virtual reality, and self-driving cars."
Watch The NVIDIA GTC 2018 Keynote – Live at 9 AM PT, March 27
By Hassan Mujtaba, 10 hours ago
https://wccftech.com/nvidia-gtc-2018-livestream/
" Those who are expecting gaming cards to launch during GTC may be in for a disappointment as the stage has never been used to present a Gaming or ‘GeForce’ lineup.
NV GeForce usually does their own gaming specific event which is kept in wraps until the launch is really close. So far, there has been no indication from NVIDIA or the rumor mill that a new graphics card for gamers might be approaching soon.
Nevertheless, expect this GTC to be as grand as ever with over 8000 attendees and over 600+ talks related to GPUs and their surrounding ecosystem."
-
NVIDIA Announces World's Most Powerful Professional GPU - The Quadro GV100
The 5,120 CUDA cores join the 32 GB of HBM2 memory and up to 7.4 TFLOPs of power for double-precision rendering, 14.8 TFLOPs for single-precision workloads, 29.6 TFLOPs half-precision, and 118.5 TFLOPs for deep learning through its Tensor Cores. The previous high-end Quadro, the GP100, offered 10.3 TFLOPs for single-precision rendering.
-
https://www.nvidia.com/en-us/design-visualization/quadro-store/
Big Volta Comes to Quadro: NVIDIA Announces Quadro GV100
by Ryan Smith on March 27, 2018 1:30 PM EST
https://www.anandtech.com/show/12579/big-volta-comes-to-quadro-nvidia-announces-quadro-gv100
Along with today’s memory capacity bump for the existing Tesla V100 cards, NVIDIA is also rolling out a new Volta-based card for the Quadro family. Aptly named the Quadro GV100, this is the successor to last year’s Quadro GP100, and marks the introduction of the Volta architecture into the Quadro family.
As a consequence of NVIDIA’s GPU lines bifurcating between graphics and compute, in the last couple of years the Quadro family has been in an odd spot where it straddles the line between the two. Previously the king of all NVIDIA cards, the Quadro family has itself been bifurcated a bit, between the compute-GPU-derived cards like the Quadro GP100 and now GV100, and the more pure graphics cards like the P-series. The introduction of the Quadro GV100 in turn looks to maintain the status quo here, delivering an even more powerful Quadro card with chart-topping graphics performance, but also the GV100 GPU’s strong compute heritage.
NVIDIA Quadro Specification Comparison

                        GV100            GP100           P6000           M6000
CUDA Cores              5120             3584            3840            3072
Tensor Cores            640              N/A             N/A             N/A
Texture Units           320              224             240             192
ROPs                    128              128             96              96
Boost Clock             ~1450MHz         ~1430MHz        ~1560MHz        ~1140MHz
Memory Clock            1.7Gbps HBM2     1.4Gbps HBM2    9Gbps GDDR5X    6.6Gbps GDDR5
Memory Bus Width        4096-bit         4096-bit        384-bit         384-bit
VRAM                    32GB             16GB            24GB            24GB
ECC                     Full             Full            Partial         Partial
Half Precision          29.6 TFLOPs?     21.5 TFLOPs     N/A             N/A
Single Precision        14.8 TFLOPs      10.3 TFLOPs     12 TFLOPs       7 TFLOPs
Double Precision        7.4 TFLOPs       5.2 TFLOPs      0.38 TFLOPs     0.22 TFLOPs
Tensor Performance      118.5 TFLOPs     N/A             N/A             N/A
TDP                     250W             235W            250W            250W
GPU                     GV100            GP100           GP102           GM200
Architecture            Volta            Pascal          Pascal          Maxwell 2
Manufacturing Process   TSMC 12nm FFN    TSMC 16nm       TSMC 16nm       TSMC 28nm
Launch Date             March 2018       March 2017      October 2016    March 2016
While NVIDIA’s pre-brief announcement doesn’t mention whether the Quadro GP100 is being discontinued, the Quadro GV100 is nonetheless the de facto replacement for NVIDIA’s last current-generation Big Pascal card. The official specifications for the card put it at 14.8 TFLOPs of single precision performance, which works out to a fully-enabled GV100 GPU clocked at around 1.45GHz. This is only a hair below the mezzanine Tesla V100, and ahead of the PCIe variant. And like the capacity-bumped Tesla cards, the Quadro GV100 ships with 32GB of natively ECC-protected HBM2. This finally gets an NVIDIA professional visualization card to 32GB; the GP100 was limited to 16GB, and the Quadro P6000 tops out at 24GB.
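Those FLOPS figures fall straight out of the core counts and clock. A quick back-of-the-envelope check in Python (illustrative only; the ~1.45GHz boost clock is AnandTech's inference from the official numbers, not a published spec):

# Peak FLOPS = cores x FLOPs-per-core-per-clock x clock (illustrative only)
cores = 5120          # CUDA cores on a fully-enabled GV100
tensor_cores = 640
boost_clock = 1.45e9  # Hz, estimated

fp32 = cores * 2 * boost_clock   # one FMA = 2 FLOPs per core per clock
fp64 = fp32 / 2                  # GV100 runs FP64 at half the FP32 rate
fp16 = fp32 * 2                  # and packed FP16 at double
tensor = tensor_cores * 128 * boost_clock  # each Volta tensor core: 64 FMAs/clock

for name, val in (("FP32", fp32), ("FP64", fp64), ("FP16", fp16), ("Tensor", tensor)):
    print(f"{name}: {val / 1e12:.1f} TFLOPS")
# -> FP32: 14.8, FP64: 7.4, FP16: 29.7, Tensor: 118.8 -- matching the table above;
#    NVIDIA's official 118.5 TFLOPS tensor figure implies a slightly lower clock.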
On the features front, the card also ships with NVIDIA’s tensor cores fully enabled, with performance again in the ballpark of the Tesla V100. Like the Quadro GP100’s compute features, the tensor cores aren’t expected to be applicable to all situations, but there are some professional visualization scenarios where NVIDIA expects it to be of value. More importantly though, the Quadro GV100 continues the new tradition of shipping with 2 NVLink connectors, meaning a pair of the cards can be installed in a system and enjoy the full benefits of the interface, particularly low latency data transfers, remote memory access, and memory pooling.
At a high level, the Quadro GV100 should easily be the fastest Quadro card, a distinction the GP100 didn’t always hold versus its pure graphics siblings, and that alone will undoubtedly move cards. As we’ve already seen with the Titan V in the prosumer space – NVIDIA dodging expectations by releasing the prosumer Volta card first and ProViz card second – the Titan V can be a good deal faster than any of the Pascal cards, assuming that software is either designed to take advantage of the architecture, or at least meshes well with NVIDIA’s architectural upgrades. Among other things, NVIDIA is once again big into virtual reality this year, so the GV100 just became their flagship VR card, convenient timing for anyone looking for a fast card to drive the just-launched HTC Vive Pro.
However, the GV100’s bigger calling within NVIDIA’s ecosystem is that it’s now the only Quadro card using the Volta architecture, meaning it’s the only card to support hardware raytracing acceleration vis-à-vis NVIDIA’s RTX technology.
Announced last week at the 2018 Game Developers Conference, RTX is NVIDIA’s somewhat ill-defined hardware acceleration backend for real-time raytracing. And while the GDC announcement was focused on the technology’s use in games and game development, at GTC the company is focusing on the professional uses of it, including yet more game development, but also professional media creation. Not that NVIDIA expects movie producers to suddenly do final production in real-time on GPUs, but as with the game asset creation scenario, the idea is to significantly improve realism during pre-production by giving artists a better idea of what a final scene would look like.
Along with Microsoft’s new DirectX Raytracing API, the RTX hardware will also be available within NVIDIA’s OptiX ray tracing engine – which is almost certainly a better fit for ProViz users – while NVIDIA is also saying that Vulkan support is on tap for the future. And like the game development scenario, NVIDIA will also be looking to leverage their tensor cores here as well in order to use them for AI denoising. Which, given the still limited raytracing performance of current hardware, is increasingly being set up as the critical component for making real-time ray tracing viable in 2018.
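For a sense of why the denoiser is pitched as the make-or-break piece: Monte Carlo ray tracing error shrinks only with the square root of the sample count, so the few samples per pixel a real-time frame budget allows leave visible noise. A toy Python illustration of the scaling (just the underlying statistics, not NVIDIA's algorithm):

import random
random.seed(0)
TRUE_VALUE = 0.5  # pretend ground-truth pixel radiance

def estimate(spp):
    # each "ray" is a coin flip whose mean converges on TRUE_VALUE
    hits = sum(random.random() < TRUE_VALUE for _ in range(spp))
    return hits / spp

for spp in (1, 4, 16, 64, 256):
    err = sum(abs(estimate(spp) - TRUE_VALUE) for _ in range(2000)) / 2000
    print(f"{spp:4d} samples/pixel -> mean abs error {err:.3f}")
# Error roughly halves only when samples quadruple; at the 1-4 spp a real-time
# frame can afford, the image stays visibly noisy -- the gap the AI denoiser fills.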
Otherwise, the Quadro GV100 looks to be a fairly standard Quadro card. TDP has gone up ever so slightly from the Quadro GP100 – from 235W to 250W – so while it should generally be drop-in replaceable, it’s not strictly identical. Nor are the display outputs identical; the Quadro GV100 has dropped the GP100’s sole DVI port, leaving it with a pure 4x DisplayPort 1.4 setup. The card also features the standard Quadro Sync and Stereo connectors for synchronized refresh and quad-buffered stereo respectively.
Wrapping things up, the Quadro GV100 is shipping immediately from NVIDIA, and OEMs will begin including it in their systems in June. Official pricing has not been announced, but like the GP100 before it, I would expect this card to run for north of $5,000. (it's $8,999!!)
https://nvidianews.nvidia.com/news/...eal-time-ray-tracing?nvid=nv-int-qrg0nt-35251
"GPU Technology Conference — NVIDIA today announced the NVIDIA® Quadro® GV100 GPU with NVIDIA RTX™ technology, delivering for the first time real-time ray tracing to millions of artists and designers.
The biggest advance in computer graphics since the introduction of programmable shaders nearly two decades ago, NVIDIA RTX — when combined with the powerful Quadro GV100 GPU — makes computationally intensive ray tracing possible in real time when running professional design and content creation applications.
Media and entertainment professionals can see and interact with their creations with correct light and shadows, and do complex renders up to 10x faster than with a CPU alone. Product designers and architects can create interactive, photoreal visualizations of massive 3D models — all in real time.
“NVIDIA has reinvented the workstation by taking ray-tracing technology optimized for our Volta architecture, and marrying it with the highest-performance hardware ever put in a workstation,” said Bob Pette, vice president of Professional Visualization at NVIDIA. “Artists and designers can simulate and interact with their creations in ways never before possible, which will fundamentally change workflows across many industries.”
NVIDIA RTX technology was introduced last week at the annual Game Developers Conference. Today NVIDIA announced that it is supported by more than two dozen of the world’s leading professional design and creative applications with a combined user base of more than 25 million customers.
The Quadro GV100 GPU, with 32GB of memory, scalable to 64GB with multiple Quadro GPUs using NVIDIA NVLink™ interconnect technology, is the highest-performance platform available for these applications. Based on NVIDIA’s Volta GPU architecture, the GV100 packs 7.4 teraflops of double-precision, 14.8 teraflops of single-precision and 118.5 teraflops of deep learning performance. And the NVIDIA OptiX™ AI-denoiser built into NVIDIA RTX delivers almost 100x the performance of CPUs for real-time, noise-free rendering.
Additional Benefits
Other benefits of Quadro GV100 with NVIDIA RTX technology include:
Easy implementation through a variety of APIs — Developers can access NVIDIA RTX technology through the NVIDIA OptiX application programming interface, Microsoft’s new DirectX Raytracing API and, in the future, Vulkan, an open, cross-platform graphics standard. All three APIs have a common shader programming model that allows developers to support multiple platforms.
Life-like lighting, reflections and shadows using real-world light and physical properties — GV100 and NVIDIA RTX ray-tracing technology deliver unprecedented speed of cinematic-quality renderings.
Supercharged rendering performance with AI — OptiX AI-accelerated denoising performance for ray tracing provides fluid visual interactivity throughout the design process.
Highly scalable performance — Fast double-precision coupled with the ability to scale memory up to 64GB using NVLink to render large complex models with ease.
Ability to collaborate, design, create in immersive VR — VR ready with the maximum graphics and compute performance available means designers can use physics-based, immersive VR platforms to conduct design reviews and explore photoreal scenes and products at scale.
Broad Support from Software Developers
A broad range of software developers are showing strong support for GV100 and real-time ray tracing:
“We are using the NVIDIA RTX OptiX AI denoiser to bring workflow enhancements to the Arnold renderer and look forward to getting it into the hands of our customers working in animation and visual effects production.” — Chris Vienneau, senior director of Media & Entertainment Product at Autodesk
“The availability of NVIDIA RTX opens the door to make real-time ray tracing a reality. By making such powerful technology available to the game development community with the support of the new DirectX Raytracing API, NVIDIA is the driving force behind the next generation of game and movie graphics.” — Kim Libreri, chief technology officer at Epic Games
“With NVIDIA GV100 GPUs and RTX, we can now do real-time ray tracing. It’s just fantastic!” — Sébastien Guichou, CTO at Isotropix
“We use powerful NVIDIA GPU technologies like the new Quadro GV100 to accelerate our simulation applications and algorithms, and NVIDIA OptiX for fast, AI-based rendering. We’re excited about the potential NVIDIA RTX ray-tracing technology holds to deliver more lifelike images faster than ever.” — Jacques Delacour, CEO and founder of OPTIS
“The new Quadro GV100 with RTX technology delivers unprecedented real-time ray-tracing performance, helping our customers to be first to market, gaining hundreds of thousands of dollars over their competition each year.” — Brian Hillner, SOLIDWORKS Visualize Product Portfolio Manager
Availability
The Quadro GV100 GPU is available now on nvidia.com, and starting in April from leading workstation manufacturers, including Dell EMC, HP, Lenovo and Fujitsu, and authorized distribution partners, including PNY Technologies in North America and Europe, ELSA/Ryoyo in Japan and Leadtek in Asia Pacific.
Learn more about the benefits of the Quadro GV100 for deep learning and simulation."
https://www.nvidia.com/en-us/data-center/dgx-2/
NVIDIA’s DGX-2: Sixteen Tesla V100s, 30 TB of NVMe, only $400K
by Ian Cutress on March 27, 2018 2:00 PM EST
https://www.anandtech.com/show/12587/nvidias-dgx2-sixteen-v100-gpus-30-tb-of-nvme-only-400k
Ever wondered why the consumer GPU market is not getting much love from NVIDIA’s Volta architecture yet? This is a minefield of a question, nuanced by many different viewpoints and angles – even asking the question will poke the proverbial hornet nest inside my own mind of different possibilities. Here is one angle to consider: NVIDIA is currently loving the data center, and the deep learning market, and making money hand-over-fist. The Volta architecture, with CUDA Tensor cores, is unleashing high performance to these markets, and the customers are willing to pay for it. So introduce the latest monster from NVIDIA: the DGX-2.
DGX-2 builds upon DGX-1 in several ways. Firstly, it introduces NVIDIA’s new NVSwitch, enabling 300 GB/s chip-to-chip communication at 12 times the speed of PCIe. This, with NVLink2, enables sixteen GPUs to be grouped together in a single system, for a total bandwidth going beyond 14 TB/s. Add in a pair of Xeon CPUs, 1.5 TB of memory, and 30 TB of NVMe storage, and we get a system that consumes 10 kW, weighs 350 lbs, but offers easily double the performance of the DGX-1. NVIDIA likes to tout that this means it offers a total of ~2 PFLOPs of compute performance in a single system, when using the tensor cores.
Cost $399,000
NVIDIA’s overall topology relies on a dual stacked system. The high level concept photo provided indicates that there are actually 12 NVSwitches (216 ports) in the system in order to maximize the amount of bandwidth available between the GPUs. With 6 ports per Tesla V100 GPU, each running in the larger 32GB of HBM2 configuration, this means that the Teslas alone would be taking up 96 of those ports if NVIDIA has them fully wired up to maximize individual GPU bandwidth within the topology.
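Running the numbers on that topology (per-link bandwidth assumes NVLink 2.0's published 25GB/s per direction, i.e. 50GB/s bidirectional; illustrative only):

nvlink_bw = 50         # GB/s per NVLink 2.0 link, bidirectional
links_per_gpu = 6
gpus = 16
switches = 12
ports_per_switch = 18  # NVSwitch is an 18-port NVLink crossbar

print(gpus * links_per_gpu, "switch ports used by GPUs")  # 96 of the...
print(switches * ports_per_switch, "ports available")     # ...216 total
print(links_per_gpu * nvlink_bw, "GB/s per GPU")           # 300
# With 8 GPUs per baseboard, all eight can talk across to the other board
# at the full per-GPU rate at once:
print(8 * links_per_gpu * nvlink_bw / 1000, "TB/s bisection bandwidth")  # 2.4
print(gpus * 32, "GB of pooled HBM2")                      # 512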
AlexNet, the network that 'started' the latest machine learning revolution, now takes 18 minutes to train
Notably here, the topology of the DGX-2 means that all 16 GPUs are able to pool their memory into a unified memory space, though with the usual tradeoffs involved if going off-chip. Not unlike the Tesla V100 memory capacity increase then, one of NVIDIA’s goals here is to build a system that can keep in-memory workloads that would be too large for an 8 GPU cluster. Providing one such example, NVIDIA is saying that the DGX-2 is able to complete the training process for FAIRSEQ – a neural network model for language translation – 10x faster than a DGX-1 system, bringing it down to less than two days total rather than 15.
Otherwise, similar to its DGX-1 counterpart, the DGX-2 is designed to be a powerful server in its own right. Exact specifications are still TBD, but NVIDIA has already told us that it’s based around a pair of Xeon Platinum CPUs, which in turn can be paired with up to 1.5TB of RAM. On the storage side the DGX-2 comes with 30TB of NVMe-based solid state storage, which can be further expanded to 60TB. And for clustering or further inter-system communications, it also offers InfiniBand and 100GigE connectivity, up to eight of them.
The new NVSwitches mean that the PCIe lanes of the CPUs can be redirected elsewhere, most notably towards storage and networking connectivity.
Ultimately the DGX-2 is being pitched at an even higher-end segment of the deep-learning market than the DGX-1 is. Pricing for the system runs at $400k, rather than the $150k for the original DGX-1. For more than double the money, the user gets Xeon Platinums (rather than v4), double the V100 GPUs each with double the HBM2, triple the DRAM, and 15x the NVMe storage by default.
NVIDIA has stated that DGX-2 is already certified for the major cloud providers."
-
Still waiting for Nvidia to post the Keynote to their Youtube channel...
Update: now the Ustream link has the full presentation for replay:
https://www.ustream.tv/gpu-technology-conference
-
NVidia's $400,000 "Graphics Card" - 16 GPUs, 1.5TB RAM DGX-2
Published on Mar 27, 2018
We get hands-on -- well, hands-off (they wouldn't let us touch it) -- with the NVIDIA DGX-2, a $400,000 super computer with 16 Volta GPUs.
Article: https://www.gamersnexus.net/news-pc/3270-nvidia-tesla-v100-accelerator-32gb-hbm2
While at GTC 2018, we checked out the new $400,000 DGX-2 data center solution for deep learning and machine learning. The DGX-2 moves the V100 from 16GB to 32GB of RAM. Speaking of, there's 1.5TB of system memory in this box and 512GB of HBM2. Fair to say, between the HBM and system memory, half the cost is probably RAM! -
Nvidia stock dropped $18.96 (7.76%) today.
Nvidia halts self-driving tests in wake of Uber accident
MARCH 27, 2018 / 9:23 AM / UPDATED AN HOUR AGO
https://www.reuters.com/article/us-...-tests-in-wake-of-uber-accident-idUSKBN1H32E0
"(Reuters) - Chipmaker Nvidia Corp said on Tuesday it has suspended self-driving tests across the globe, a week after an Uber Technologies Inc autonomous vehicle struck and killed a woman crossing a street in Arizona.
Uber is one of Nvidia’s partners and uses some of its self-driving technology. Nvidia’s shares closed down 7.8 percent at $225.52, wiping out more than $11 billion in market value.
Nvidia shares have more than doubled in value in the past 12 months on bets that the company will become a leader in chips for driverless cars, data centers and artificial intelligence.
Uber suspended North American tests of its autonomous vehicles after the fatal collision on March 18 in Tempe, Arizona. And Arizona on Monday suspended permission for Uber to test self-driving vehicles on public streets."
Nvidia shares suffer third-worst day since 2011 as it pulls test cars off roads after pedestrian death in Uber collision
Published: Mar 27, 2018 5:34 p.m. ET
https://www.marketwatch.com/story/n...alt-overshadows-annual-celebration-2018-03-27
"Nvidia Corp. shares suffered their third-worst day since 2011 on Tuesday, as the company’s decision to halt testing of self-driving cars overshadowed new products announced at its annual developers conference.
Nvidia (NVDA) stock closed with a 7.8% drop at $225.52, with the decline starting after Reuters reported that the company had halted self-driving tests after an Uber Technologies Inc. test car killed a pedestrian in a collision last week. The report arrived just as Chief Executive Jensen Huang took the stage in San Jose, Calif., for his keynote at the chip maker’s GPU Technology Conference, in which he explained where Nvidia is taking its advances in artificial intelligence.
Huang did not bring up the self-driving halt in his keynote, but did address it in a subsequent session with the media and at an Investor Day event with analysts Tuesday. In both sessions, Huang stressed that stopping the testing was strictly cautionary in case the investigations into the pedestrian death show an issue beyond Uber.
“The reason we suspended was actually very simple — obviously there’s a new data point as a result of the accident last week,” Huang said in a Q&A session with the press. “As engineers, we should wait to see if we learn something from that experience.”
“Until I learn something from this, let’s pretend there is something to learn,” he told analysts at the Investor Day presentation, presenting it as an “abundance of caution.”
Nvidia has ridden hopes for big gains from AI advances to huge stock gains in the past couple of years, as the chip maker has rolled out new products aiming to take advantage of graphics chips’ usefulness in accelerating machine learning. Automotive revenue has not jumped like Nvidia’s data-center business, however, and the testing delay could push it back more, said RBC Capital Markets analyst Mitch Steves.
“While it is unclear if the timeline will be pushed out (2020 automotive story), the pause will likely cause a reassessment of timelines,” Steves, who maintained an outperform rating and $285 price target, wrote in a note Tuesday.
Opinion: New mining chip could erode AMD, Nvidia graphics-chip sales
One of the new technologies Huang showed off Tuesday was a self-driving simulator, a virtual environment that would allow companies to evaluate and train their autonomous-driving technologies in an environment without the dangers of real-world testing. Nvidia executives claim that the virtual environment can simulate bad weather, dangerous conditions and other situations that have caused trouble for autonomous vehicles, though they said it would not completely preclude the need for real-world testing.
“I think this will enable the acceleration of moving through that process to get to the point where we’ll be able to continue to augment with actual testing, but to be able to have additional software developed in a faster amount of time,” Danny Shapiro, Nvidia’s senior director of automotive products, said in response to a question from MarketWatch. “So, we’re not saying that it will necessarily totally replace it, but [provide] a lot more simulated miles.”
Even after Tuesday’s decline, Nvidia stock has gained more than 108% in the past year, as the S&P 500 index (SPX) has increased 13.5%."
Several of the market's favorite technology stocks tanked as investors grew concerned over the companies' growth markets following new developments Tuesday.
Nvidia, the best performing chip stock in the S&P 500 over the past year, stumbled down 8 percent after the chipmaker announced it would suspend self-driving tests.
By Tae Kim @firstadopter, Updated 3 Hours Ago
https://www.cnbc.com/2018/03/27/pop...t-smoked-as-investors-fear-tech-backlash.html
The tech sector sell-off is not just about Facebook's data scandal anymore.
Several of the market's favorite technology stocks tanked as investors grew concerned over the companies' ambitious growth following new developments Tuesday. The NYSE FANG index fell 5.6 percent on Tuesday and is down 6 percent over the last week, its worst showing since the index began in 2014.
The so-called FANG stocks all dropped, with Facebook down 5 percent, Amazon off by 4 percent, Netflix falling 6 percent and Google-parent Alphabet lower by 4.5 percent.
And the pain was shared in other tech stocks. Twitter declined 12 percent after a famed short seller issued a negative report, predicting more regulation over its social media platform after Facebook's controversy.
Nvidia, the best performing chip stock in the S&P 500 over the past year, stumbled down 8 percent after the chipmaker announced it would suspend self-driving tests. Artificial intelligence and autonomous driving are two of the company's key promising markets.
And Tesla shares fell 8 percent after the National Transportation Safety Board said it sent investigators to look into a fatal car crash last week in California, according to a post on social media.
The steep drop in these popular technology stocks weighed on the sector's leaders.
The PowerShares QQQ Trust (QQQ), which tracks the Nasdaq 100 index, fell 3.2 percent on heavy volume. The fund traded more than 75 million shares Tuesday, well above its 30-day volume average of 46.6 million.
Facebook and Nvidia weren't alone in the tech stock drop party today:
-
Deploy HPC Applications Faster with NVIDIA GPU Cloud
Published on Mar 26, 2018
Hear NVIDIA Solutions Architect Matthew Jones demonstrate how to seamlessly pull and deploy an HPC application container from NVIDIA GPU Cloud. https://nvda.ws/2IQndjL
Comments...
DAVIDBLADE94 1 hour ago
Nvidia come on please i need that GTX 2080 please i have 750€ ready to pay you, if you give me this fabulous graphic card <3 please NVidia(((((((( i have a gtx 970 with a 4k monitor, they are light years back , but efficiency remains epic (Y) this crazy miners put the GTX 1080ti up to 1200 € i need you help
Walt Kowalski 5 hours ago
drop prices assholes
Xeon Gaming 8 hours ago
fix GPU drivers !!! AMD HAWE BETTER DRIVERS !!!
Veselin Petrov 15 hours ago (edited)
Nvidia im still waiting for my GTX.Hurry up guys!
StonedRambo01 1 day ago
How about deploying gpu's faster??? -
What’s Happening at GTC 2018
Published on Mar 27, 2018
Check out the latest and greatest from this year’s NVIDIA GPU Technology Conference, the world’s premier AI event. All the highlights are featured here: http://www.GPU-tech-conf.com
I Am AI: GTC 2018 kickoff video
Published on Mar 27, 2018
The GPU Technology Conference 2018 keynote kicked off showing the many ways AI is changing our lives.
NVIDIA Volta in the Cloud at GTC 2018
Published on Mar 27, 2018
At GTC 2018, learn how AWS was first to the cloud with NVIDIA Tesla V100. AWS and NVIDIA are partnering to revolutionize industries and companies by enabling developers to build powerful machine learning applications on the most advanced, highest-performing GPU-accelerated cloud infrastructure.
-
NVIDIA GTC 2018 supercut
Published on Mar 28, 2018
NVIDIA's President and CEO, Jensen Huang, kicks off their GTC technology conference in Silicon Valley today with a keynote. Here it is in under 15 minutes.
-
Gone in 60.121 seconds: Your guide to the pricey new gear Nvidia teased at its annual GPU fest
Yours if you can afford it... and wait long enough for the fabs to make the chips
By Katyanna Quach 28 Mar 2018 at 07:29
https://www.theregister.co.uk/2018/03/28/nvidia_gtc_roundup/
" GTC Nvidia CEO Jensen Huang flaunted a bunch of stuff, from bigger boxes of graphics chips to robot simulators, at its 2018 GPU Technology Conference (GTC) in Silicon Valley on Tuesday.
Here's a quick summary of what went down, minus the fluff.
- Huang acknowledged in a Q&A session that there is indeed a shortage right now of top-end Nv GPUs, mainly due to people grabbing them for cryptocurrencies and blockchain ledgers. Nvidia simply can't make enough Tesla chips to go around, partly due to demand and partly due to yield, we reckon. Thus far, the chip designer's only solution is: make and ship as many GPUs as possible. The biz wants to concentrate on giving hardware to cloud and supercomputer builders, gamers, graphics artists, scientists, engineers, enterprises... anyone but those annoying crypto-kids.
- If you've got roughly a million dollars to blow on deep-learning research, Pure Storage and Nvidia have produced a hyperconverged stack of flash memory and flagship Tesla Volta GV100 GPUs, plus some extra bits and pieces, called AIRI. Assuming the GPUs are available, natch.
- Programmers, engineers, and other techies dreaming of creating robots that sport some sort of machine intelligence can ask nicely to check out Isaac: this is a forthcoming software development kit and simulator, with libraries, drivers and other tools, for designing, testing and building machine-learning-based robotic gizmos.
- Speaking of programmers, TensorRT 4 – a GPU-accelerated deep-learning inference framework – has landed. Nvidia and Google boffins have integrated TensorRT into TensorFlow 1.7, if you prefer to use that engine for AI coding (a rough sketch of that integration follows this list).
- Nvidia is stepping into the world of networking with the NVSwitch, a switch for its high-speed NVLink interconnect. Meanwhile, Tesla V100 GPUs – Nvidia's top of the line chips for data centers – are now available with 32GB of HBM2 memory rather than the usual 16. Tying it all together is the new DGX-2 system, which has 16 32GB V100s connected via 12 NVSwitches for 2.4TB/s of bisection bandwidth and 512GB total HBM2 memory. The box has 1.5TB of system memory, two Intel Xeon CPUs, 30TB of flash storage, and Infiniband, 100GbE, and 10/25GbE interfaces. Nvidia claims it can hit as high as 2 PFLOPS with mixed-precision floating-point math. Yours for $400,000. This gear builds upon the $150,000 DGX-1 and DGX Station previously launched.
- Nvidia has jumped aboard Project Trillium, Arm's effort to cram AI inference processing into chips powering wearables, gadgets, and Internet-of-Things devices. This is aimed at silicon designers: Nvidia is offering NVDLA as a free and open architecture for building deep-learning accelerators into hardware.
- If you're working on self-driving cars – and who isn't – you may or may not be tempted by Nvidia's new Drive Constellation, a stack of boxes for simulating autonomous vehicle control software without crashing robo-rides or killing people (right, Uber?). Don't hold your breath – these GPU-accelerated machines won't arrive until the third quarter of 2018 at the earliest, and that's only for Nv's favorite customers.
- Scientists, engineers and artists needing some serious firepower for simulations and rendering are offered the new Quadro GV100 with 32GB of HBM2 memory. Each offers up to 7.4 TFLOPS of double-precision floating-point math performance, or 14.8 TFLOPS with single-precision, and two can be linked via an NVLink interconnect to act as one GPU, doubling the maximum potential performance and HBM2 capacity. These should be available now direct from Nvidia, or from suppliers next month.
- For those interested in virtualized GPUs in the cloud, there's a bunch of announcements here. And if you're a virtual reality designer or developer, there's stuff to tease you here.
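On the TensorRT/TensorFlow item above: the TF 1.7 integration is exposed under TensorFlow's contrib namespace. A minimal sketch of converting a frozen model for TensorRT-accelerated inference - the model file and the "logits" output node name here are hypothetical placeholders, so treat this as an outline rather than a definitive recipe:

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # ships with TF 1.7

# Load an already-frozen GraphDef (hypothetical file/node names)
with tf.gfile.GFile("model_frozen.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# TensorRT rewrites the subgraphs it supports into optimized engine ops;
# anything it can't handle keeps running as ordinary TensorFlow ops.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["logits"],
    max_batch_size=8,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16")  # FP16 engages Volta's tensor cores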
-
GTC 2018 Keynote with NVIDIA CEO Jensen Huang
Published on Mar 28, 2018
Watch a replay of NVIDIA CEO Jensen Huang's keynote address at the GPU Technology Conference 2018 in Silicon Valley, where he unveiled a series of advances to NVIDIA’s deep learning computing platform that deliver a 10x performance boost on deep learning workloads; launched the Quadro GV100 GPU, transforming workstations with 118.5 TFLOPS of deep learning performance; introduced NVIDIA DRIVE Constellation to run self-driving car systems for billions of simulated miles, and much more.
Jump ahead to:
2:38 - NVIDIA founder and CEO Jensen Huang takes the stage
3:53 - Computer graphics and NVIDIA RTX ray-tracing technology
11:40 - Epic Games' Star Wars themed "Reflections" demo, with real-time ray tracing
19:06 - Announcing Quadro GV100 with NVIDIA RTX
27:54 - Rise of GPU computing - science needs supercharged computers
38:17 - Project Clara medical imaging supercomputer demo
48:24 - Introducing Tesla V100 GPU with 32GB of HBM2 memory
54:30 - Introducing the world's largest GPU, NVIDIA DGX-2 with NVSwitch
1:08:55 - Advancing the software stack for AI inferencing with TensorRT 4, NVIDIA GPU Cloud
1:23:20 - Kubernetes on NVIDIA GPUs
1:33:45 - NVIDIA Research conditional GANs demo
1:37:56 - Future of autonomous vehicles - data is the new source code
1:49:25 - Self-driving car technology demo, NVIDIA DRIVE roadmap with Orin
2:00:31 - VR simulated training demo, announcing NVIDIA DRIVE Constellation and DRIVE Sim
2:11:18 - Robotics boosts every industry, introducing SDK for NVIDIA Isaac robotics platform
2:14:45 - Remote control of autonomous car in VR using NVIDIA Holodeck
2:22:17 - The GPU computing revolution continues
GTC 2018 NVIDIA News Recap
Published on Mar 28, 2018
Watch this summary of NVIDIA's key announcements from the keynote at the GPU Technology Conference 2018 in Silicon Valley, featuring updates in computer graphics, AI computing, robotics, VR, and more.
Read more details: https://nvda.ws/2J1RHQ2
NVIDIA DRIVE—GTC 2018 Demonstration
Published on Mar 28, 2018
At this year’s GPU Technology Conference, NVIDIA illustrated some of its AI building blocks for self-driving cars and an autonomous drive through city streets and onto a highway.
I am AI Docuseries, Episode 6: Running Wild for Nature Conservation - Wildbook
Published on Mar 28, 2018
Wildbook, AI-powered software developed by researchers, is helping the Kenyan government track zebra movement to save them from extinction.
NVIDIA Isaac platform for robotics at GTC 2018
Published on Mar 28, 2018
The robotics revolution shifted into high gear at this year’s NVIDIA GPU Technology Conference. Watch to learn how to accelerate the development and deployment of robotics with the NVIDIA Isaac platform. The new Isaac software development kit (SDK) is a collection of libraries, drivers, APIs, and other tools that will save manufacturers, researchers, startups, and developers hundreds of hours by making it easy to add AI into next-generation robots for perception, navigation, and manipulation.
Read the blog here: https://nvda.ws/2J31TaY
-
Discover the World’s Largest GPU: NVIDIA DGX-2
Published on Mar 28, 2018
Watch to learn how we’ve created the first 2 petaFLOPS deep learning system, using NVIDIA NVSwitch to combine the power of 16 V100 GPUs for 10X the deep learning performance.
Visual Tour of GTC 2018
Published on Mar 28, 2018
See what's going on at our GPU Technology Conference, the premier AI and deep learning event in Silicon Valley. Read about what we announced during this event: https://nvda.ws/2GzqVjD
GTC 2018: Inception Awards Winners Recap
Published on Mar 28, 2018
In this year’s NVIDIA Inception Program, more than 200 AI startups applied to compete in three categories—healthcare, enterprise, and autonomous systems—for a chance to win part of a $1 million prize pool. The top twelve semi-finalists participated in Inception Pitch Day to earn one of the six finalist spots at our annual GPU Technology Conference. Watch the winning pitches from Subtle Medical, AiFi, and Kinema Systems, delivered to a high-profile panel of judges from Goldman Sachs, Fidelity Investments, and Coatue Management. Read the blog here: https://nvda.ws/2GEKQxs
VMware powers Virtualization at GPU Technology Conference
Published on Mar 28, 2018
In this video from the 2018 GPU Technology Conference, Ziv Kalmanovich from VMware and Fred Devoir from NVIDIA describe how they are working together to bring the benefits of virtualization to GPU workloads.
-
Jensen's Coffee Delivery Bot & AI at GTC
Published on Mar 28, 2018
These collaborative working robots were shown at GTC. Veo Robotics works with humans, Kinema packs pallets, and NVIDIA's bot delivers coffee to Jensen.
We show a few cool robots at GTC 2018. This was stuff that was just kind of neat -- not necessarily directly related to our audience, but stuff that you all might find interesting or amusing. The AI bots in this video are one half collaborative working (augmenting human abilities) and one half rudimentary -- closing and opening drawers, for instance.
-
Try our new driverless car software says Nvidia, as it suspends driverless car trials
Post crash test hits share price
By Gareth Corfield 28 Mar 2018 at 18:32
https://www.theregister.co.uk/2018/03/28/nvidia_halts_driverless_trials_announces_simulator/
"Nvidia chief exec Jen-hsung Huang waving a Drive PX board"
"Nvidia has declared the creation of a “cloud-based system” for testing driverless cars – just as it, er, suspended testing of driverless cars.
Hot on the heels of its announcement of its Drive Constellation system, the chipmaker then quietly suspended autonomous vehicle tests that use its technology after a pedestrian was killed by an Uber vehicle operating autonomously in Arizona.
"Ultimately AVs will be far safer than human drivers, so this important work needs to continue. We are temporarily suspending the testing of our self-driving cars on public roads to learn from the Uber incident," a Nvidia spokesperson told CNN. "Our global fleet of manually driven data collection vehicles continue to operate."
The announcement saw Nvidia's stock price fall by as much as 10 per cent before recovering slightly.
The Drive Constellation system is a two-server offering intended for simulated trials of autonomous vehicle software. One server runs Nvidia’s Drive Sim software, which simulates sensors such as cameras, radar and LIDAR, while the other server simulates “the complete autonomous vehicle software stack”, using sensor data generated by the first server.
“With virtual simulation, we can increase the robustness of our algorithms by testing on billions of miles of custom scenarios and rare corner cases, all in a fraction of the time and cost it would take to do so on physical roads,” Nvidia’s Rob Csongor, a company veep, said in the inevitable canned statement.
Evidently Nvidia, along with the other firms that have adopted its Drive PX autonomous driving suite for their auto autos, will now have even more time on their hands to road-test the Drive Constellation.
At the CES consumer tech knees-up in Las Vegas last year, Nvidia chief exec Jen-hsun Huang promised to put Level 4 driverless cars on the roads by 2020, in partnership with Audi. It would appear there is a significant speed bump along that particular road."
-
HW News - Next-Gen Gaming GPUs, GPP (Cont'd), Meltdown Performance
Published on Apr 1, 2018
We talk about hardware news for the past week, including next-generation gaming GPUs (i.e. Turing, Ampere), Meltdown performance, GPP, and more.
4x Volta Custom EK Water Cooling (& DGX-2 Torn Down)
Published on Mar 29, 2018
We got a hands-on with the DGX-2 and DGX Station, the latter of which has 4x Volta V100s under a custom EK water cooling loop.
GDDR6 Price, New GPU Launch Timelines, & Mass Production
Published on Mar 27, 2018
SK Hynix's GDDR6 memory is coming soon, and it'll land on the next generation of nVidia GPUs -- the fabled GTX 1180 (or 2080) sometime... eventually.
-
RAID No More: GPUs Power NSULATE for Extreme HPC Data Protection
Published on Apr 2, 2018
"In this video from GTC 2018, Alexander St . John from Nyriad demonstrates how the company's NSULATE software running on Advanced HPC gear provides extreme data protection for HPC data.
"RAID6 was standardized in 1993 in an era of single-core computing. For exascale computing, RAID is an obstacle to higher performance and resilience. NSULATE revolutionises the role of the storage controller by replacing a fixed-function RAID controller with a powerful general-purpose GPU. Using a GPU as a storage controller enables the calculation of several storage functions on the same high performance controller, enabling more efficient storage processing without sacrificing performance. This enables modern storage appliances to deliver unprecedented speed, scale, security, storage efficiency and intelligence in real-time.
Extreme Resilience
NSULATE offers extreme data resilience. It uses a GPU to generate erasure encoded parity calculations to enable automatic data recovery on scales impossible with a RAID card or a CPU.
While traditional RAID and erasure coding solutions support parity calculations between 2 and 6, NSULATE supports real-time Reed-Solomon erasure coding up to 255 parity. Stable I/O throughput can be maintained even while experiencing dozens of simultaneous device failures and corruption events across an array.
Continuous Verification
NSULATE adds support for cryptographic data verification and recovery to all storage applications. NSULATE includes a complete suite of hash functions for corruption detection and recovery, including CRC32C as well as the NIST compatible cryptographic hash functions, SHA2 & SHA3. NSULATE also includes support for blockchain cryptographic hash functions SHA2 Merkle & SHA3 Merkle for blockchain auditable storage solutions."
Learn more: http://www.nyriad.com/products/nsulate/
and
https://www.advancedhpc.com/ "
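To make the parity talk concrete: the simplest erasure code is RAID-5-style XOR parity, where one parity block recovers any one lost block. Reed-Solomon generalizes this so k parity blocks can recover k lost blocks (NSULATE: up to 255), at a compute cost that grows quickly - which is the whole argument for swapping the RAID controller for a GPU. A toy Python sketch of the XOR base case (not Nyriad's code):

from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR across equal-length blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data blocks on three drives
parity = xor_blocks(data)           # stored on a fourth drive

lost = 1                            # drive 1 dies
survivors = [blk for i, blk in enumerate(data) if i != lost]
recovered = xor_blocks(survivors + [parity])
assert recovered == data[lost]      # XOR of the rest restores the lost block
print("recovered:", recovered)      # b'BBBB'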
Inside the new NVIDIA DGX-2 Supercomputer with NVSwitch
Published on Apr 2, 2018
"In this video from the GPU Technology Conference, Marc Hamilton from NVIDIA describes the new DGX-2 supercomputer with the NVSwitch interconnect.
"The rapid growth in deep learning workloads has driven the need for a faster and more scalable interconnect, as PCIe bandwidth increasingly becomes the bottleneck at the multi-GPU system level.
NVLink is a great advance to enable eight GPUs in a single server, and accelerate performance beyond PCIe. But taking deep learning performance to the next level will require a GPU fabric that enables more GPUs in a single server, and full-bandwidth connectivity between them.
NVIDIA NVSwitch is the first on-node switch architecture to support 16 fully-connected GPUs in a single server node and drive simultaneous communication between all eight GPU pairs at an incredible 300 GB/s each. These 16 GPUs can be used as a single large-scale accelerator with 0.5 Terabytes of unified memory space and 2 petaFLOPS of deep learning compute power."
Learn more: https://www.nvidia.com/en-us/data-cen... "
-
Disney showed off what Real Time Ray tracing of Star Wars looks like with 8 Nvidia GPUs via SVGN.io
Published on Apr 1, 2018
Shaky video of the last minute or so of a session at GTC 2018 where Disney showed off what real-time ray tracing of Star Wars looks like with 8 Nvidia GPUs. Only 8 GPUs. This is possible thanks to AI denoising technology.
The session was called: S8414 - Walt Disney Imagineering Technology Preview: Real-time Rendering of a Galaxy Far, Far Away
Walt Disney Imagineering strives to create amazing guest experiences at Disney Parks worldwide. Partnering with Nvidia and Epic Games, Imagineering has developed new technology to drive one of the key attractions at the upcoming Star Wars: Galaxy's Edge opening in Disneyland Resort, CA and Disney's Hollywood Studios, FL. Come learn more about how we took advantage of the newest in Nvidia hardware and the technical modifications that we made for the Unreal Engine which will allow 8 GPUs to render at unprecedented quality and speed.
Silicon Valley Global News SVGN.io