No one has mentioned whether laptops with the 10W version of the MX150 are cheaper than laptops with the 25W version.
It would be easier to forgive if the vendors were selling the 10W MX150 for a lot less money.
Nvidia Selling Worse GPU's To Consumers?
-
-
I mean, we can overclock, right? But why should we have to, and why were we sold an MX150 thinking there was only the one model? I thought the OEMs were underclocking it themselves until I saw the articles. It's just underhanded.
Still, I'd love one paired with a nice display like the Dell QHD 13" or the new 4K. So I guess at least some companies are putting a GPU in there. Retina is nice but a complete waste without a GPU, let alone a quad-core. -
Looking at the 940MX synthetics versus the underclocked Asus MX150, there is nothing in it. And to make it worse, the Iris 640 gets very similar synthetics (but not in real-world games). This is a bit of an annoyance. Even more annoying, the MX130's 3DMark 11 score is on par with the underclocked MX150 and the 940MX. Not kosher, Nvidia, not at all.
-
What's more annoying is that the Surface Book 1 940M model is the same price as the Asus here. And its display is one of the best in the business. Not to mention the whole Surface perspective; as a designer I find it quite appealing. Windows Hello camera. Hmmmm
-
Rumors flying as GTC 2018 approaches...
Nvidia GTC 2018 - March 26th - 29th
http://forum.notebookreview.com/threads/nvidia-gtc-2018-march-26th-29th.814949/
-
https://www.nvidia.com/en-us/design-visualization/quadro-store/
Big Volta Comes to Quadro: NVIDIA Announces Quadro GV100
by Ryan Smith on March 27, 2018 1:30 PM EST
https://www.anandtech.com/show/12579/big-volta-comes-to-quadro-nvidia-announces-quadro-gv100
Along with today’s memory capacity bump for the existing Tesla V100 cards, NVIDIA is also rolling out a new Volta-based card for the Quadro family. Aptly named the Quadro GV100, this is the successor to last year’s Quadro GP100, and marks the introduction of the Volta architecture into the Quadro family.
As a consequence of NVIDIA’s GPU lines bifurcating between graphics and compute, over the last couple of years the Quadro family has been in an odd spot where it straddles the line between the two. Previously the king of all NVIDIA cards, the Quadro family itself has instead been bifurcated a bit, between the compute-GPU-derived cards like the Quadro GP100 and now GV100, and the more purely graphics-focused cards like the P-series. The introduction of the Quadro GV100 in turn looks to maintain the status quo here, delivering an even more powerful Quadro card with chart-topping graphics performance that also carries the GV100 GPU’s strong compute heritage.
NVIDIA Quadro Specification Comparison

                        GV100            GP100            P6000            M6000
CUDA Cores              5120             3584             3840             3072
Tensor Cores            640              N/A              N/A              N/A
Texture Units           320              224              240              192
ROPs                    128              128              96               96
Boost Clock             ~1450MHz         ~1430MHz         ~1560MHz         ~1140MHz
Memory Clock            1.7Gbps HBM2     1.4Gbps HBM2     9Gbps GDDR5X     6.6Gbps GDDR5
Memory Bus Width        4096-bit         4096-bit         384-bit          384-bit
VRAM                    32GB             16GB             24GB             24GB
ECC                     Full             Full             Partial          Partial
Half Precision          29.6 TFLOPs?     21.5 TFLOPs      N/A              N/A
Single Precision        14.8 TFLOPs      10.3 TFLOPs      12 TFLOPs        7 TFLOPs
Double Precision        7.4 TFLOPs       5.2 TFLOPs       0.38 TFLOPs      0.22 TFLOPs
Tensor Performance      118.5 TFLOPs     N/A              N/A              N/A
TDP                     250W             235W             250W             250W
GPU                     GV100            GP100            GP102            GM200
Architecture            Volta            Pascal           Pascal           Maxwell 2
Manufacturing Process   TSMC 12nm FFN    TSMC 16nm        TSMC 16nm        TSMC 28nm
Launch Date             March 2018       March 2017       October 2016     March 2016
While NVIDIA’s pre-brief announcement doesn’t mention whether the Quadro GP100 is being discontinued, the Quadro GV100 is none the less the de facto replacement for NVIDIA’s last current-generation Big Pascal card. The official specifications for the card put it at 14.8 TFLOPs of single precision performance, which works out to a fully-enabled GV100 GPU clocked at around 1.45GHz. This is only a hair below the mezzanine Tesla V100, and ahead of the PCIe variant. And like the capacity-bumped Tesla cards, the Quadro GV100 ships with 32GB of natively ECC-protected HBM2. This finally gets an NVIDIA professional visualization card to 32GB; the GP100 was limited to 16GB, and the Quadro P6000 tops out at 24GB.
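As a quick sanity check on those figures, peak FP32 throughput is commonly estimated as CUDA cores × 2 FLOPs per core per clock (one fused multiply-add) × clock speed. A minimal sketch using the spec-table numbers above; the ~1.45GHz figure is the back-calculated assumption mentioned in the article, not an official clock:

```python
# Rough peak-FP32 estimate: cores * 2 FLOPs (one FMA) * boost clock.
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

# Quadro GV100: 5120 cores at ~1.45 GHz -> ~14.8 TFLOPs, matching the official figure.
print(round(peak_fp32_tflops(5120, 1.45), 1))   # ~14.8
# Quadro GP100: 3584 cores at ~1.43 GHz -> ~10.3 TFLOPs.
print(round(peak_fp32_tflops(3584, 1.43), 1))   # ~10.3
```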
On the features front, the card also ships with NVIDIA’s tensor cores fully enabled, with performance again in the ballpark of the Tesla V100. Like the Quadro GP100’s compute features, the tensor cores aren’t expected to be applicable to all situations, but there are some professional visualization scenarios where NVIDIA expects it to be of value. More importantly though, the Quadro GV100 continues the new tradition of shipping with 2 NVLink connectors, meaning a pair of the cards can be installed in a system and enjoy the full benefits of the interface, particularly low latency data transfers, remote memory access, and memory pooling.
At a high level, the Quadro GV100 should easily be the fastest Quadro card, a distinction the GP100 didn’t always hold versus its pure graphics siblings, and that alone will undoubtedly move cards. As we’ve already seen with the Titan V in the prosumer space – NVIDIA dodging expectations by releasing the prosumer Volta card first and the ProViz card second – the Titan V can be a good deal faster than any of the Pascal cards, assuming that software is either designed to take advantage of the architecture or at least meshes well with NVIDIA’s architectural upgrades. Among other things, NVIDIA is once again big into virtual reality this year, so the GV100 just became their flagship VR card, conveniently timed for anyone looking for a fast card to drive the just-launched HTC Vive Pro.
However, the GV100’s bigger calling within NVIDIA’s ecosystem is that it’s now the only Quadro card using the Volta architecture, meaning it’s the only card to support hardware ray-tracing acceleration vis-à-vis NVIDIA’s RTX technology.
Announced last week at the 2018 Game Developers Conference, RTX is NVIDIA’s somewhat ill-defined hardware acceleration backend for real-time ray tracing. And while the GDC announcement was focused on the technology’s use in games and game development, at GTC the company is focusing on its professional uses, including yet more game development, but also professional media creation. Not that NVIDIA expects movie producers to suddenly do final production in real time on GPUs, but as with the game asset creation scenario, the idea is to significantly improve realism during pre-production by giving artists a better idea of what a final scene would look like.
Along with Microsoft’s new DirectX Raytracing API, the RTX hardware will also be available within NVIDIA’s OptiX ray tracing engine – which is almost certainly a better fit for ProViz users – while NVIDIA is also saying that Vulkan support is on tap for the future. And as in the game development scenario, NVIDIA will also be looking to leverage their tensor cores here for AI denoising, which, given the still-limited ray-tracing performance of current hardware, is increasingly being set up as the critical component for making real-time ray tracing viable in 2018.
Otherwise, the Quadro GV100 looks to be a fairly standard Quadro card. TDP has gone up ever so slightly from the Quadro GP100 – from 235W to 250W – so while it should generally be drop-in replaceable, it’s not strictly identical. Nor are the display outputs identical; the Quadro GV100 has dropped the GP100’s sole DVI port, leaving it with a pure 4x DisplayPort 1.4 setup. The card also features the standard Quadro Sync and Stereo connectors for synchronized refresh and quad-buffered stereo, respectively.
Wrapping things up, the Quadro GV100 is shipping immediately from NVIDIA, and OEMs will begin including it in their systems in June. Official pricing has not been announced, but like the GP100 before it, I would expect this card to run for north of $5,000. (try $8,999 !!)
NVIDIA RTX Technology Delivers Biggest Advance in Computer Graphics in 15 Years
https://nvidianews.nvidia.com/news/...eal-time-ray-tracing?nvid=nv-int-qrg0nt-35251
"GPU Technology Conference — NVIDIA today announced the NVIDIA® Quadro® GV100 GPU with NVIDIA RTX™ technology, delivering for the first time real-time ray tracing to millions of artists and designers.
The biggest advance in computer graphics since the introduction of programmable shaders nearly two decades ago, NVIDIA RTX — when combined with the powerful Quadro GV100 GPU — makes computationally intensive ray tracing possible in real time when running professional design and content creation applications.
Media and entertainment professionals can see and interact with their creations with correct light and shadows, and do complex renders up to 10x faster than with a CPU alone. Product designers and architects can create interactive, photoreal visualizations of massive 3D models — all in real time.
“NVIDIA has reinvented the workstation by taking ray-tracing technology optimized for our Volta architecture, and marrying it with the highest-performance hardware ever put in a workstation,” said Bob Pette, vice president of Professional Visualization at NVIDIA. “Artists and designers can simulate and interact with their creations in ways never before possible, which will fundamentally change workflows across many industries.”
NVIDIA RTX technology was introduced last week at the annual Game Developers Conference. Today NVIDIA announced that it is supported by more than two dozen of the world’s leading professional design and creative applications with a combined user base of more than 25 million customers.
The Quadro GV100 GPU, with 32GB of memory, scalable to 64GB with multiple Quadro GPUs using NVIDIA NVLink™ interconnect technology, is the highest-performance platform available for these applications. Based on NVIDIA’s Volta GPU architecture, the GV100 packs 7.4 teraflops of double-precision, 14.8 teraflops of single-precision and 118.5 teraflops of deep learning performance. And the NVIDIA OptiX™ AI-denoiser built into NVIDIA RTX delivers almost 100x the performance of CPUs for real-time, noise-free rendering.
Additional Benefits
Other benefits of Quadro GV100 with NVIDIA RTX technology include:
Easy implementation through a variety of APIs — Developers can access NVIDIA RTX technology through the NVIDIA OptiX application programming interface, Microsoft’s new DirectX Raytracing API and, in the future, Vulkan, an open, cross-platform graphics standard. All three APIs have a common shader programming model that allows developers to support multiple platforms.
Life-like lighting, reflections and shadows using real-world light and physical properties — GV100 and NVIDIA RTX ray-tracing technology deliver unprecedented speed of cinematic-quality renderings.
Supercharged rendering performance with AI — OptiX AI-accelerated denoising performance for ray tracing provides fluid visual interactivity throughout the design process.
Highly scalable performance — Fast double-precision coupled with the ability to scale memory up to 64GB using NVLink to render large complex models with ease.
Ability to collaborate, design, create in immersive VR — VR ready with the maximum graphics and compute performance available means designers can use physics-based, immersive VR platforms to conduct design reviews and explore photoreal scenes and products at scale.
Broad Support from Software Developers
A broad range of software developers are showing strong support for GV100 and real-time ray tracing:
“We are using the NVIDIA RTX OptiX AI denoiser to bring workflow enhancements to the Arnold renderer and look forward to getting it into the hands of our customers working in animation and visual effects production.” — Chris Vienneau, senior director of Media & Entertainment Product at Autodesk
“The availability of NVIDIA RTX opens the door to make real-time ray tracing a reality. By making such powerful technology available to the game development community with the support of the new DirectX Raytracing API, NVIDIA is the driving force behind the next generation of game and movie graphics.” — Kim Libreri, chief technology officer at Epic Games
“With NVIDIA GV100 GPUs and RTX, we can now do real-time ray tracing. It’s just fantastic!” — Sébastien Guichou, CTO at Isotropix
“We use powerful NVIDIA GPU technologies like the new Quadro GV100 to accelerate our simulation applications and algorithms, and NVIDIA OptiX for fast, AI-based rendering. We’re excited about the potential NVIDIA RTX ray-tracing technology holds to deliver more lifelike images faster than ever.” — Jacques Delacour, CEO and founder of OPTIS
“The new Quadro GV100 with RTX technology delivers unprecedented real-time ray-tracing performance, helping our customers to be first to market, gaining hundreds of thousands of dollars over their competition each year.” — Brian Hillner, SOLIDWORKS Visualize Product Portfolio Manager
Availability
The Quadro GV100 GPU is available now on nvidia.com, and starting in April from leading workstation manufacturers, including Dell EMC, HP, Lenovo and Fujitsu, and authorized distribution partners, including PNY Technologies in North America and Europe, ELSA/Ryoyo in Japan and Leadtek in Asia Pacific.
Learn more about the benefits of the Quadro GV100 for deep learning and simulation."
https://www.nvidia.com/en-us/data-center/dgx-2/
NVIDIA’s DGX-2: Sixteen Tesla V100s, 30 TB of NVMe, only $400K
by Ian Cutress on March 27, 2018 2:00 PM EST
https://www.anandtech.com/show/12587/nvidias-dgx2-sixteen-v100-gpus-30-tb-of-nvme-only-400k
Ever wondered why the consumer GPU market is not getting much love from NVIDIA’s Volta architecture yet? This is a minefield of a question, nuanced by many different viewpoints and angles – even asking the question will poke the proverbial hornet’s nest inside my own mind of different possibilities. Here is one angle to consider: NVIDIA is currently loving the data center and the deep learning market, and making money hand over fist. The Volta architecture, with CUDA Tensor cores, is unleashing high performance to these markets, and the customers are willing to pay for it. So introduce the latest monster from NVIDIA: the DGX-2.
DGX-2 builds upon DGX-1 in several ways. Firstly, it introduces NVIDIA’s new NVSwitch, enabling 300 GB/s chip-to-chip communication at 12 times the speed of PCIe. This, with NVLink2, enables sixteen GPUs to be grouped together in a single system, for a total bandwidth going beyond 14 TB/s. Add in a pair of Xeon CPUs, 1.5 TB of memory, and 30 TB of NVMe storage, and we get a system that consumes 10 kW, weighs 350 lbs, but offers easily double the performance of the DGX-1. NVIDIA likes to tout that this means it offers a total of ~2 PFLOPs of compute performance in a single system, when using the tensor cores.
Cost $399,000
NVIDIA’s overall topology relies on a dual stacked system. The high level concept photo provided indicates that there are actually 12 NVSwitches (216 ports) in the system in order to maximize the amount of bandwidth available between the GPUs. With 6 ports per Tesla V100 GPU, each running in the larger 32GB of HBM2 configuration, this means that the Teslas alone would be taking up 96 of those ports if NVIDIA has them fully wired up to maximize individual GPU bandwidth within the topology.
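The port and throughput math in that topology is easy to verify. A minimal sketch of the arithmetic; the 18-ports-per-NVSwitch figure is simply the 216/12 ratio implied above, and the "~2 PFLOPs" claim assumes roughly 120 tensor TFLOPs per Tesla V100:

```python
# DGX-2 topology arithmetic, as implied by the figures in the article.
nvswitches = 12
ports_per_switch = 216 // nvswitches      # -> 18 ports per NVSwitch
gpus = 16
nvlinks_per_gpu = 6
gpu_ports_used = gpus * nvlinks_per_gpu   # -> 96 of the 216 ports wired to GPUs
tensor_tflops_per_v100 = 120              # assumption: ~120 tensor TFLOPs per Tesla V100
total_pflops = gpus * tensor_tflops_per_v100 / 1000   # -> ~1.9, i.e. "~2 PFLOPs"
print(ports_per_switch, gpu_ports_used, total_pflops)
```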
AlexNET, the network that 'started' the latest machine learning revolution, now takes 18 minutes
Notably here, the topology of the DGX-2 means that all 16 GPUs are able to pool their memory into a unified memory space, though with the usual tradeoffs involved if going off-chip. Not unlike the Tesla V100 memory capacity increase then, one of NVIDIA’s goals here is to build a system that can keep in-memory workloads that would be too large for an 8 GPU cluster. Providing one such example, NVIDIA is saying that the DGX-2 is able to complete the training process for FAIRSEQ – a neural network model for language translation – 10x faster than a DGX-1 system, bringing it down to less than two days total rather than 15.
Otherwise, similar to its DGX-1 counterpart, the DGX-2 is designed to be a powerful server in its own right. Exact specifications are still TBD, but NVIDIA has already told us that it’s based around a pair of Xeon Platinum CPUs, which in turn can be paired with up to 1.5TB of RAM. On the storage side the DGX-2 comes with 30TB of NVMe-based solid state storage, which can be further expanded to 60TB. And for clustering or further inter-system communications, it also offers InfiniBand and 100GigE connectivity, up to eight of them.
The new NVSwitches mean that the PCIe lanes of the CPUs can be redirected elsewhere, most notably towards storage and networking connectivity.
Ultimately the DGX-2 is being pitched at an even higher-end segment of the deep-learning market than the DGX-1 is. Pricing for the system runs at $400k, rather than the $150k for the original DGX-1. For more than double the money, the user gets Xeon Platinums (rather than v4), double the V100 GPUs each with double the HBM2, triple the DRAM, and 15x the NVMe storage by default.
NVIDIA has stated that DGX-2 is already certified for the major cloud providers."
-
Facebook drama, GTC and no new GTX 1180 and Metro Exodus Ray tracing: This week in the tech news
Nvidia Has Some Big Challenges Ahead: Kevin O'Leary and Jason Calacanis On Tech Stocks | CNBC
Published on Mar 28, 2018
Kevin O'Leary, O'Shares, and Jason Calacanis, Inside.com, discuss tech stocks and the companies investors should watch.
-
GTX 1180 Coming In July With GDDR6?!
-
Looking to nab Nvidia's GeForce chips? You need cash and patience
GPU shortage equals four-month wait time for buyers
By Katyanna Quach 30 Mar 2018 at 20:07
https://www.theregister.co.uk/2018/03/30/nvidia_geforce_chips_crypto_miner_induced_shortage/
"Tech companies are suffering setbacks from the shortage of Nvidia’s GPUs, with the GeForce series being hit the hardest.
Following his keynote speech at his biz's GPU Technology Conference in San Jose this week, Nvidia CEO Jensen Huang explained to journalists that it was down to intense Ethereum mining.
Vendors working at the booths during the conference told The Register that lead times for building GPU workstations and servers have increased. Nvidia’s GeForce chips, an older and cheaper series, are most in demand – especially the GTX 1080 Ti.
A chart on PCPartPicker, a website that helps users compare prices for computer parts, shows clear spikes in the price for GeForce cards from the past 18 months. GTX 1080Tis are now about $1,000, a rise of about $300 compared to prices in March last year.
A sales manager from Tyan Computer Corporation, a Taiwan-based company that manufactures motherboards for high-end servers, told us that it put in an order for about 300 GeForce chips in January, and only received 100 of them recently.
He estimated that the wait time could be as long as 16 weeks. An employee from Exxact, who also offer high performance computing servers and racks, said it was about a 12 to 16 week wait.
Ethereum is the most popular type of cryptocurrency where GPUs are favored over ASIC chips for mining. Since GeForce chips are much cheaper than Nvidia’s Tesla Volta V100 chips that go for several thousands of dollars, miners get more bang for their buck.
The wait times for chips in the Tesla series are much shorter, roughly on the order of two to four weeks. Nvidia produces a lower volume of Tesla chips, aimed at more intensive workloads for deep learning and AI.
Huang said Nvidia is not in the business for cryptocurrency mining, and wants to keep GeForce chips for the gaming industry. In fact, the GPU giant updated its end-user license agreement last year in an attempt to force customers to cough up for its higher end gear like the Tesla V100 chips in data centers.
An Nvidia spokesperson previously told us: “GeForce and Titan GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation, where there are often multi-stack racks. To clarify this, we recently added a provision to our GeForce-specific EULA to discourage potential misuse of our GeForce and TITAN products in demanding, large-scale enterprise environments.”
When El Reg asked Nvidia how long it would take to clear its backlog of orders, a spokesperson told us: “We have nothing further to add.”"
-
Your Nvidia GPU Might Burn Up & Die?
Reference Nvidia GTX 780s/780Tis/Titan(XM)s/980Tis are burning out their memory inductors a lot.
No, those aren't April 1st videos... these are:
Happy April 1st...
http://forum.notebookreview.com/threads/happy-april-1st.815115/
-
LOOOL (April 1st today)
"The 1180 Ti is an absolute monster by all accounts, with double the CUDA cores of NVIDIA’s mammoth GV100 GPU and a near 70% the boost clock speed you’re looking at performance that would resurrect the dead! With 10752 CUDA cores on tap we’re talking about a GPU that would even trounce two GTX Titan V cards in SLI. Forget about 120Hz 4K gaming, we’re in 8K gaming territory now."
https://wccftech.com/nvidia-geforce...ores-2-5ghz-clock-titan-crushing-performance/ -
-
GTX 1180 Ti Pricing, Elite GeForce Experience & GeForce Edge
NVIDIA is introducing a brand new service called Elite GeForce Experience which will allow users to download game-ready drivers a week in advance of game releases, share and download overclocking profiles as well as enjoy NVIDIA partnered titles for free.
The service costs $69 a month and users are required to sign up for a year before they can purchase a GTX 1180 Ti. To activate your subscription you’re required to attend a two-week hardware re-education course at NVIDIA’s headquarters in Santa Clara, California, as well as take a full biometric scan and provide a DNA sample.
As soon as your subscription expires your GTX 1180 Ti will be deactivated, and you will need to renew your subscription and ship another DNA sample to NVIDIA to have your subscription re-verified.
And look at the specs.
-
-
NVIDIA Tesla GPUs Supercharge Discovery at GTC 2018
-
"Not trying to stir up controversy here. I've just seen way too many of these in way too short a time period to ignore it."
Reference Nvidia GTX 780s/780Tis/Titan(XM)s/980Tis are burning out their memory inductors a lot. -
NVIDIA GeForce GT 1030 Shipping with DDR4 Instead of GDDR5
"Will this memory swap affect real-world performance? Probably. However, we won't know till what extent without proper testing. Unlike the GeForce MX150 fiasco, manufacturers were kind enough to let consumers know the difference between both models this time around. The lower-end DDR4 variant carries the "D4" denotation as part of the graphics card's model or consumers can find the denotation on the box. Beware, though, as not all manufacturers will give you the heads up. For example, Palit doesn't." -
NVIDIA Pulling MORE BS! - WAN Show April 6 2018
00:20:35 - NVIDIA quietly rolls out slower, lower TDP GT 1030 with DDR4 VRAM
00:59:55 - ASUS AREZ
Published on Apr 6, 2018
-
I pulled out of buying the UX331UN. I'm an Asus person and an Nvidia guy; I just felt they needed more disclosure on the item.
Looking at the recent releases, I'm excited for what the next-gen MS Surface Studio might get.
I get to use one, and as a designer it's the finest hardware on the planet, well ahead of its time, probably like MS Andromeda is going to be for the mobile arena. If Apple weren't so lazy they could have had one OS like MS by now. Certainly on the iPad Pro it makes total sense, with their use case being designers and artists who need to import files and use full-sized apps like Lightroom, and not only that but be able to work seamlessly with macOS. macOS on the iPad Pro WILL take work, since the touch interface is based on generations of iOS, but nothing great comes easy.
Not only this, but with the power of the A11X, the iPad Pro will kill the Android high-end market; imagine another notebook-class device like the Surface Pro, just aimed at a slightly different audience. Then they can add a trackpad/mouse and there will be no more moaning of "oh, people can just use a Pencil." There are many use cases where a mouse is easier.
Anyway, back to the Studio: I love it, but I don't like the graphics card. I feel they need to go studio-grade, which may push the prices even higher. But in a design house a $10k machine shouldn't be a hard sell, and the graphics should be high-end Quadro cards.
I wonder if my Andromeda device will fit the Studio colour wheel on it, rofl. -
-
GTC 2018 - NVIDIA CEO Jensen Huang Keynote Supercut
-
Hades Canyon Review: AMD+Intel Threaten NVIDIA (NUC8i7HVK)
Published on Apr 11, 2018
Intel and AMD worked together on Hades Canyon, the new NUC that combines i7 Coffee Lake CPUs with AMD Vega M GPUs.
-
Support.2@XOTIC PC Company Representative
-
Dell and HP Not Interested in Jumping on the NVIDIA GPP Bandwagon
by Chino Friday, April 13th 2018 10:48
https://www.techpowerup.com/243322/dell-and-hp-not-interested-in-jumping-on-the-nvidia-gpp-bandwagon
"Our colleague Kyle Bennett from HardOCP has spoken with his trusted industry sources and found out that big names like Dell and HP haven't penned the deal with NVIDIA to join the GeForce Partner Program (GPP). HP recently introduced their updated Pavilion Gaming lineup with both AMD and NVIDIA graphics card options, which goes to show that the computer giant hasn't aligned its gaming brand exclusively with NVIDIA. On the other hand, their Omen Gaming boxes weren't available with AMD graphics cards, which Kyle has noted could be a product of a supply issue. In other news, NVIDIA hasn't been able to convince Lenovo, one of the big three OEMs, to join their cause either. Lenovo Legion gaming products were still listed on their website with graphics cards from the red team. HardOCP has reached out to NVIDIA once again to inquire about which brands have comitted to GPP, but they were met with silence.
While brands like ASUS, Gigabyte, and MSI are siding with NVIDIA, Dell and HP are the real big players in the game. No other manufacturer comes close to purchasing and moving the amount of mid-end and low-end graphics cards from NVIDIA like those two do. It doesn't really come as a surprise why NVIDIA wants them to jump onboard so desperately. Kyle's behind-the-scene conversations with this sources suggest that neither Dell or HP will NVIDIA twist their arms as they consider GPP to be unethical and illegal."
NVIDIA GPP Opposition Grows, HP & Dell Refuse to Join, Intel Considering Legal Action
By Khalid Moammer, 14 hours ago
https://wccftech.com/nvidia-gpp-opposition-grows-hp-dell-say-no-intel-mulling-legal-action/
"NVIDIA’s highly controversial and allegedly anti-competitive GeForce Partner Program is being met with significant opposition from the world’s largest PC makers, HP & Dell, Kyle Bennett reports. This news comes after a report earlier last month revealed that NVIDIA had been courting the biggest three names in the graphics card add-in-board market, Asus, MSI & Gigabyte to join the GPP.
The program allegedly requires that partners align their gaming brands exclusively with NVIDIA, effectively pushing AMD out of the PC gaming market. Companies that choose not to sign up are allegedly denied “ high-effort engineering engagements — early tech engagement — launch partner status — game bundling — sales rebate programs — social media and PR support — marketing reports “ in addition to priority GPU supply by NVIDIA, putting them at a crippling competitive disadvantage.
Yesterday, Bennett published direct excerpts from documents relating to the program, underlining its anti-competitive and potentially illegal nature that would eventually limit consumer choice. Again, we should note that NVIDIA has publicly denied these allegations.
NVIDIA GPP Opposition Grows
Last month we covered the widespread backlash and uproar that engulfed the tech sphere after a growing body of evidence accumulated indicating that Gigabyte, MSI and even Asus might have already signed on. Calls for a boycott of NVIDIA and its alleged GPP partners spread like wildfire all over tech forums, Reddit and social media. Some have gone as far as to contact the Federal Trade Commission and the EU Commission, calling for the program to be investigated for its alleged anti-consumer and anti-competitive aspects.
Unfortunately, all companies involved are keeping quiet about the GeForce Partner Program, the very same one that NVIDIA has ironically described as “transparent” and great for gamers. Despite these claims, the company continues to refuse to answer some of the most basic questions about the program, like which companies have signed up. The company has reportedly told Kyle Bennett, who broke the story, that it has “moved on” from the story, seemingly in hopes that by keeping quiet the story will go away.
HP & Dell Refuse to Join NVIDIA’s GPP
Today, however, we can bring you some good news: HP and Dell have reportedly said no to the GPP, citing legal and ethical concerns in off-the-record conversations.
Kyle Bennett – April 12, 2018
“Off the record conversations suggest that both of these companies think that NVIDIA GPP is unethical, and likely illegal as it pertains to anti-competition laws here in the United States. The bottom line is that Dell and HP are very much upset with NVIDIA over GPP, and Dell and HP look to be digging in for a fight.”
Intel Gearing up for a Fight Over NVIDIA’s GeForce Partner Program
Bennett also reports that Intel is very much aware of the GPP and the negative impact it will likely have on sales of its Kaby Lake-G processor, which features a custom-made Radeon GPU that AMD is producing specifically for the chip giant. Bennett goes on to state that he expects Intel to initiate legal action against NVIDIA over the GeForce Partner Program.
Kyle Bennett – April 12, 2018
“The other unknown in this is Intel. Big Blue is very much aware of what is going on, and GPP could very much impact the sales of its Kaby Lake-G part that contains a GPU that was built by AMD specifically for Intel. I would expect we are going to see legal action initiated on NVIDIA GPP by Intel at some point in the future.”
PC Gamers Already Channeling Anger Into Action
I have to say that I’m not overly surprised by either of these developments. While Asus, Gigabyte and MSI are large, Intel, HP and Dell are simply giants in the industry and will not be pushed around. What ostensibly began as a move to consolidate more power in a market that NVIDIA already commanded appears to be backfiring.
We’re already seeing the effects and ripples this has had on the company’s image among PC gamers. In a recent poll we ran, 82% of you said you would boycott the company and its GPP partners over what is now developing into an all out scandal.
More outspoken individuals have even proceeded to contact the FTC and EU Commission to file legal complaints.
PC gamers have picked the PC as their platform of choice because of the incredible freedom that it offers. Begin to chip away at what has made this platform so great and what this passionate community has cherished for decades and you will awaken a sleeping giant.
The GPP could very well prove to be a grave mistake for the company and a very costly lesson."
-
-
Geforce Partner Program FORCES Brand Segregation
Published on Apr 17, 2018
Asus confirms an AMD-exclusive line of graphics cards, and an AMD blog post calls for openness and transparency in opposition to the Nvidia GeForce Partner Program.
-
The FTC And EU Commission Investigating NVIDIA's GPP?
Published on Apr 17, 2018
More Trouble For NVIDIA's Infamous GPP - Holy Crap!
-
Nvidia Max-Q limits fan noise to 40 dBA when gaming - so why are we recording louder results?
"39 dBA" is quite the lofty goal for the super-thin Asus Zephyrus
Not all Max-Q laptops are created equal.
by Allen Ngo, 2017/11/27
https://www.notebookcheck.net/Nvidi...are-we-recording-louder-results.258636.0.html
"We've all been in this situation. The manufacturer promises a 15-hour battery life on a new laptop and yet owners would be lucky to get half or three-quarters of the advertised claim. The large discrepancies can be boiled down to two causes: differences in testing conditions and usage scenarios between the manufacturer and end-user. Companies like Lenovo and HP often rely on MobileMark standards when testing battery life that may underestimate what most users will actually be doing on their notebooks.
This same concept has seemingly carried over to Nvidia's new Max-Q series. The chipmaker explicitly states that its Max-Q laptops target "40 dBA or better" when running "typical gaming loads" but has otherwise failed to make public the exact testing conditions for such a claim. Even Tom's Hardware is unable to disclose the processing load or game title Nvidia is using to simulate "typical gaming" conditions.
What little has been publicly communicated includes Nvidia's vague 25 cm measurement from some distance "above the laptop" in a hemi-anechoic environment in accordance with the ISO 7779 standard. Since the setup and loads are not public information, we cannot reproduce or verify the "40 dBA or better" statement and are expected to simply trust Nvidia's advertisement the same way we are expected to trust advertised battery life results.
We'd love to prove the "maximum of 40 dBA" claim, but we just don't know the conditions Nvidia is using
Nvidia does not detail the setup, the notebook power settings, the game, the testing length, or anything else other than the vague "25 cm distance" and "typical gaming load"
The secrecy of Nvidia's testing methodologies to meet the 40 dBA criteria only exacerbates the discrepancies between manufacturer and end-user conditions. Companies cannot account for every permutation of the end-user experience and thus must approximate it similar to how Nvidia cannot possibly account for every type of game and their varying demands on the CPU and GPU. As a result, there is no guarantee that a Max-Q notebook will be 40 dBA or quieter depending on how far the end-user is sitting from the notebook, the type of game running, how long the game will be running for, the ambient temperature, the paired CPU, FPS limiters, and other potential variables not detailed by the GeForce maker.
While we can't discredit Nvidia's 40 dBA claim, we can measure fan noise with our own tools and have been doing so on a handful of Max-Q notebooks. It's true that our in-house measurements are not directly comparable to Nvidia, but we've discovered that all our tested Max-Q notebooks can run indubitably louder than 40 dBA with surprisingly wide RPM ranges when subjected to extreme 100 percent utilization loads (Prime95 + FurMark). Some notebooks top out at 50+ dBA to be exceptionally louder than 40 dBA considering that the decibel scale is logarithmic.
The results make sense when reading a bit deeper into Nvidia's 40 dBA advertisement. The 40 dBA limit applies only to "typical gaming loads" and so any processing loads more demanding than gaming would be fair game for higher fan speeds. If a specific game happens to be more demanding than Nvidia's mysterious "typical gaming load" scenario, then there is potential for louder fan noise.
Our list of tested Max-Q laptops so far. Most are actually quite close to 40 dBA when running Witcher 3, but a few are really pushing the boundaries. Generally, however, fan noise when gaming is still quieter than most "standard GTX" gaming notebooks
Our own Notebookcheck test criteria utilize Witcher 3 to represent "real world" gaming loads with fan noise results provided in the table above. A good handful of these Max-Q notebooks are quite close to the 40 dBA mark while a select few are clearly not. The questionable models include the Gigabyte Aero 15X, Dell Alienware 15, and MSI GS73VR 7RG at 44 dBA, 48.0 dBA, and 45.8 dBA, respectively, when running Witcher 3 to be over twice or even thrice as loud as 40 dBA each. We consider systems in the 40 to 44 dBA range to be reasonable since our microphone is always placed 15 cm in front of the notebook whereas the Max-Q standard places the microphone further away at 25 cm elevated to eye level. The cut off point, of course, is shaky ground nonetheless.
We can only assume that Nvidia's testing procedures have been designed to simulate real-world gaming conditions as closely as possible to produce more relevant data for the end user. Yet, when compared to our own Witcher 3 load measurements, the notable differences between 40 dBA and 45+ dBA in some Max-Q notebooks are hard to ignore. Are our Witcher 3 testing conditions more demanding than Nvidia's masked testing conditions? Is Nvidia utilizing a less demanding title for "typical gaming loads"? While it's important to note that we do not test in a hemi-anechoic chamber, it is nonetheless difficult to pinpoint what other variables may be responsible for the very dissimilar results and what we can say specifically about Nvidia's procedures.
Our takeaway message is that users shouldn't read the 40 dBA advertisement as a common ceiling across all Max-Q notebooks. Instead, it should be taken as a median approximation that's highly dependent on various factors including the games themselves. Max-Q fan noise can and will vary between manufacturers in response to onscreen loads - just like any standard non-Max-Q gaming notebook would."
-
NVIDIA GeForce GTX 1180 – Specs, Performance, Price & Release Date (Preliminary)
I’d be remiss if I did not mention that some of this information is preliminary and subject to change as more information is revealed and/or existing information is confirmed or debunked. This page will be kept up to date with the latest information available at any given time.
Specifications
According to the latest leaks, rumors and information provided to wccftech the GeForce GTX 1180 is powered by a 104 class GPU, codenamed GT104. The GPU measures around ~400mm², features 3584 CUDA cores, a 256-bit GDDR6 memory interface and 8 to 16 gigabytes of 16Gbps GDDR6 memory. The graphics card is expected to have a core clock of around ~1.6GHz and a boost clock of around ~1.8GHz. The TDP of this graphics card is unconfirmed to date, but is expected to be somewhere between 170-200W.
Performance
Peak FP32 compute performance is expected to be around 13 TFLOPS, depending on how high of a clock speed the graphics card will be able to hit and how often. This puts it slightly ahead of the existing GeForce GTX Titan Xp and GTX 1080 Ti. So you can expect around GTX Titan Xp performance or slightly better, but at significantly lower power consumption.
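That ~13 TFLOPS figure follows from the usual cores × 2 FLOPs × clock estimate. A quick check against the leaked numbers; the 1.8GHz boost clock is the rumored value, not a confirmed spec:

```python
# Rumored GTX 1180: 3584 CUDA cores at ~1.8 GHz boost (leaked figures, unconfirmed).
cores, boost_ghz = 3584, 1.8
print(cores * 2 * boost_ghz / 1000)   # ~12.9 TFLOPS peak FP32, i.e. "around 13 TFLOPS"
```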
-
Dell and HP Resist the NVIDIA GPP Leash - So Far
By Kyle Bennett, Thursday, April 12, 2018
Since we found out about NVIDIA's GeForce Partner Program, aka GPP, a couple of months ago, we have seen changes implemented by NVIDIA's partners, but what has not happened is far more important to point out at this time.
https://www.hardocp.com/article/2018/04/12/dell_hp_resist_nvidia_gpp_leash_so_far
"This is a quick followup to our article entitled GeForce Partner Program Impacts Consumer Choice that HardOCP published a month ago. Since then there have been a lot of developments in the industry that outline that the terms of NVIDIA GPP are exactly as we laid out in that article, specifically and most importantly that NVIDIA is requiring AIBs and OEMs to move their GPU gaming brands exclusively to NVIDIA products. Below is the exact language used by NVIDIA in documents to companies "invited" to be a GPP partner.
As of today we believe that both Dell and HP have NOT signed the GPP contract. I say believe, because neither company nor NVIDIA would confirm this on the record. I have had backchannel discussions about this with trusted sources, and this press release story pushed out by The Verge, on HP recently introducing systems in its Pavilion Gaming line with Radeon and NVIDIA GPUs inside, would suggest GPP is not in the cards for HP. However, its Omen Gaming boxes are now devoid of AMD GPUs at this time. We are hoping this is a supply issue rather than a GPP issue. All of the silence surrounding this certainly reminds us that the First Rule of GPP is, Don't Talk About GPP. But don't fear, NVIDIA has clearly stated, GPP is all about transparency to benefit the gamer.
We did reach out to NVIDIA to again ask what companies were signed up with its GPP, and once again failed to get an answer; again that transparency thing comes to mind. But as we reported a couple of weeks ago, NVIDIA has "moved on" from this story so we don't expect an answer.
Lenovo is the outlier in the big three OEMs, and we are getting little-to-no information about that company. We are unsure if Lenovo has gone with NVIDIA's GPP at this time. From what we are hearing, which is rumor and speculation, we think Lenovo has not signed on with GPP, but we could be wrong on that. However, Lenovo at this time still has its Legion brand gaming systems with Radeon GPUs listed on its site.
Dell and HP not coming on board with GPP is actually a very big deal. Out of all the companies that we think NVIDIA is strong-arming into GPP, Dell and HP have the most leverage to push back due to the massive volumes of mid- and low-end GPUs that both purchase from NVIDIA. While AMD is not able to compete on the extremely high end, it certainly is making mid-level and low-end GPUs that both Dell and HP have access to. And for what it is worth, the Vega 64 is an excellent gaming card at 1440p, which fits the bill for a huge portion of the market for high-end gaming systems. NVIDIA may be in a fight to seize these companies' gaming brands for its own, a fight which NVIDIA may just lose, and hopefully so.
Neither Dell nor HP wants to turn over its gaming brand to NVIDIA. Off the record conversations suggest that both of these companies think that NVIDIA GPP is unethical, and likely illegal as it pertains to anti-competition laws here in the United States. The bottom line is that Dell and HP are very much upset with NVIDIA over GPP, and Dell and HP look to be digging in for a fight.
On the other side of the coin, we see that ASUS, Gigabyte, and MSI have already laid their gaming brands at NVIDIA's feet. ASUS has already committed to remove all AMD GPU products that appear under its high-end Republic of Gamers brand as it pertains to video cards. (AMD motherboards will stay ROG.) All AMD cards will now carry its "AREZ" branding. It will be interesting to see how these cards are marketed through this new Arez brand, although ASUS did have an "Ares" brand in the past. Will it be a "gaming brand?" Gigabyte has already been documented as removing its Aorus gaming brand from AMD GPU products, and MSI has been spotted doing the same; however, neither company has openly announced a new brand specific to AMD GPUs.
It looks as if the Asia-based companies have rolled over for their master, NVIDIA, and given away their gaming branding in order to make sure they stay on NVIDIA's good side. The US-based companies have not yet heeled to NVIDIA's GPP leash. And NVIDIA may soon find out that there are a couple of big dogs left in the yard that might bite.
The other unknown in this is Intel. Big Blue is very much aware of what is going on, and GPP could very much impact the sales of its Kaby Lake-G part that contains a GPU that was built by AMD specifically for Intel. I would expect we are going to see legal action initiated on NVIDIA GPP by Intel at some point in the future.
We also now can share that NVIDIA has specified that it will not extend discounts to non-GPP partners. And what is appalling, but not surprising, is that NVIDIA is denying "priority allocation" to non-GPP partners as well. That basically means your GPU order must have gotten lost in the mail.
"Gaming" brands outsell non-gaming brands 3 to 1 according to the research I have seen on the subject. So to suggest that AMD will not be impacted by GPP is simply not true.
To sum up NVIDIA's actions, if you do not agree to be a part of its GPP, you lose GPU allocation, you lose GPU discounts, you lose rebates, you lose marketing development funds (MDF), you lose game bundles, you lose NVIDIA PR and marketing support, you lose high effort engineering engagements, you lose launch partner status, but you do get to keep the gaming brand that your company has developed over the years.
The carrot and stick metaphor comes to mind here. NVIDIA is telling us that its GPP program is a simple carrot, albeit a carrot that it was supplying willingly before these GPP terms were pushed out. I would suggest to you that North American OEMs are seeing GPP as a stick. As for the Asia based companies, I think they see it as just another normal business day and are still glad to have the job of pulling the wagon. One thing is for certain. Dell and HP see the danger of handing their gaming brands over to NVIDIA. We hope both stand their ground."
-
AMD’s Response to NVIDIA’s GPP - This Means War
Published on Apr 18, 2018
https://wccftech.com/amd-response-to-...
AMD has come out with an official response to NVIDIA’s controversial GeForce Partner Program and it’s a declaration of all out war.
-
-
Support.2@XOTIC PC Company Representative
-
NVIDIA Inpainting Uses AI To Magically Rebuild Corrupted Or Damaged Images
NVIDIA researchers have come up with new artificial intelligence (AI) technology that can help fill in the blanks in images that have either been corrupted or are otherwise missing details. With a team led by NVIDIA's Guilin Liu, missing pixels can be quickly reclaimed with stunning results.
This process, which NVIDIA calls image inpainting, can be used not only for restoring missing image pixels, but also for removing an unwanted object from a scene and filling it back in. If this sounds familiar, it's because Photoshop has been able to perform similar operations with Content-Aware Fill, which was introduced in the CS5 release.
However, NVIDIA goes a step further given that inpainting was trained using 55,116 random streaks and holes to improve its performance. Another 25,000 were generated to further test the algorithm's accuracy.
Sample masks used during the training process.
“Our model can robustly handle holes of any shape, size location, or distance from the image borders,” wrote the NVIDIA researchers in a paper on inpainting. “Further, our model gracefully handles holes of increasing size.
“To the best of our knowledge, we are the first to demonstrate the efficacy of deep learning image inpainting models on irregularly shaped holes.”
Read more at https://hothardware.com/news/nvidia...-corrupted-damaged-images#DEgAvO90VEu64xTI.99
NVIDIA of course has plenty of high-powered hardware to pull from for its machine learning exercises, and in this case used Tesla V100 GPUs linked up with a cuDNN-accelerated PyTorch deep learning framework.
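The building block described in the paper is a "partial convolution": each convolution only sees valid (non-hole) pixels, its output is renormalized by how many valid pixels fell under the window, and the mask shrinks layer by layer until the hole is filled. Below is a minimal PyTorch-style sketch of that idea; it is an illustrative simplification rather than NVIDIA's released implementation, and the class name, layer sizes, and toy mask are my own choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Illustrative partial convolution: convolve only valid (unmasked) pixels,
    renormalize by the fraction of valid inputs, and update the mask."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=True)
        # Fixed all-ones kernel used only to count valid pixels under each window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.window = kernel_size * kernel_size
        self.padding = padding

    def forward(self, x, mask):
        # mask: 1 where the pixel is valid, 0 inside the hole (shape N x 1 x H x W).
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.ones, padding=self.padding)
        out = self.conv(x * mask)                       # ignore hole pixels
        scale = self.window / valid_count.clamp(min=1)  # renormalize by valid fraction
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * scale + bias
        new_mask = (valid_count > 0).float()            # hole shrinks as windows touch valid pixels
        return out * new_mask, new_mask

# Toy usage: a 64x64 RGB image with a random irregular hole pattern.
img = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.25).float()
pconv = PartialConv2d(3, 16)
features, updated_mask = pconv(img, mask)
print(features.shape, updated_mask.mean().item())
```

Stacking several such layers shrinks the hole a little at each step, which is what lets the network handle irregular masks of any shape.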
To see NVIDIA image inpainting in action, simply take a look at the embedded video above. The results are remarkable, especially when looking at a woman's "gouged out" eyes being replaced with two new ones. The results aren't always perfect, but it's still a remarkable achievement that shows a lot of promise.
Eventually, NVIDIA says that its image inpainting technique could be integrated into photo editing software.
-
NVIDIA Starts Disinformation GPP Campaign
Wednesday April 18, 2018
https://www.hardocp.com/news/2018/04/18/nvidia_starts_disinformation_gpp_campaign
"Interesting rumors are now coming out that "Kyle was paid" big bucks for breaking the NVIDIA GPP story. And apparently NVIDIA's disinformation campaign to discredit the story around GPP is rubbing some folks the wrong way. Elric mentions below that "his name is also Brian," so I have to assume that PR at NVIDIA is starting this nastiness. No, I did not get paid for GPP, but I wish I would have. Hell, AMD even gave credit to PCPer for breaking story in its Freedom promo video. Interesting thoughts from Elric below.
Also worth mentioning is that I can tell you for sure that two of the things that NVIDIA told me about GPP are simply lies. Brian Burke of NVIDIA told me this about GPP before we wrote our initial story:
"There is no commitment to make any monetary payments, or discounts for being part of the program."
That is a lie. I have it in writing that is not true. NVIDIA is withholding MDF monies as well as rebates and discounts if you don't go GPP. I have had conversations with people that have confirmed exactly what I have in writing.
NVIDIA is quickly painting itself into a corner, and all I can say is that the silence from NVIDIA is deafening, and the rest of us know that too. If GPP were as great for the consumer as NVIDIA states, it would have already put hundreds of thousands of dollars into a PR campaign instead of going silent and telling the tech world it has "moved on" from the GPP story. They want this to just die and go away. Telling fellow journalists in the community that I am a paid mouthpiece of AMD is not below NVIDIA, and that is just a shame. And apparently some other folks don't like the way they are handling all of this either.
But at the end of the day, if the worst thing NVIDIA can say about me is that I get paid to tell the truth, I guess I can live with that."
-
Nvidia GTX 1180: Specifications and performance leaked - Notebookcheck.com
Nvidia GTX 1180: Specifications and performance leaked
The next-generation GeForce GTX 1180 top model could be 50% faster than the current GTX 1080. That, at least, is what data extrapolated from the Volta architecture suggests. Rumors point to a July release, and the estimated $700 price could worsen availability problems due to the ongoing mining boom.
by Christian Hintze, 23.04.2018
-
Support.2@XOTIC PC Company Representative
That definitely beats the content-aware fill I remember, but with the hardware it appears to be using currently, how soon would it really make it into a retail software product?
-
-
Support.2@XOTIC PC Company Representative
-
-
Support.2@XOTIC PC Company Representative
Google was doing some really weird stuff to images with algorithms a while back, I think. If you integrated what's already in Google Images into the initial store and then just kept going from there, it would be a good start.
-
"This process, which NVIDIA calls image inpainting, can be used not only for restoring missing image pixels, but also for removing an unwanted object from a scene and filling it back in. If this sounds familiar, it's because Photoshop has been able to perform similar operations with Content-Aware Fill, which was introduced in the CS5 release." -
Support.2@XOTIC PC Company Representative
Yeah, I just don't remember content-aware fill actually being that (accurate?) when I used it. It was an early iteration though (2010-2012 maybe) so I guess it could have improved significantly since then, but this seems like a big jump from that. Earlier content aware fill would likely turn the photos with the eyes cut out into the Pale Man from Pan's Labyrinth instead of adding eyes if memory serves.
That's been almost 10 years though, so I might be wrong.
-
Intel Burnt By Nvidia GPP (?) - RIP Kaby Lake G
UFD Tech
Published on Apr 25, 2018
Unfortunately, the fusion dance that was Kaby Lake G doesn't appear as if it will be hitting the market with the full force that was expected.
-
Support.2@XOTIC PC Company Representative
That's frustrating, I'd be pretty happy to see more KLG laptops.
-
-
Finally, good news. The greed may run into a knockout....
NVIDIA and AMD Graphics Card AIBs Are Expecting A Plunge In Demand Of Up To 40% In April
NVIDIA and AMD add-in-board manufacturers including Gigabyte, MSI and TUL are expecting their shipments to drop by as much as 40% in April, according to a report by DigiTimes. All of these Taiwan-based manufacturers have based their forecasts on the assumption that cryptocurrency mining demand is going to continue to fall. I wonder how hard this will hit AMD. -
Cryptocurrency Developers Are Protecting AMD And Nvidia's Lucrative Ethereum Tailwind
Motek Moyen , Apr.24.18
https://seekingalpha.com/article/41...cting-amd-nvidias-lucrative-ethereum-tailwind
Monero and SiaCoin Reject Bitmain’s ASIC Miners, Who Could Be Next?
By Julia Magas , MAR 29, 2018
https://cointelegraph.com/news/monero-and-siacoin-reject-bitmains-asic-miners-who-could-be-next
"On March 24 the creators of Monero made an unprecedented statement - the project devlead, Riccardo Spagni, warned that the coin’s protocol would be changed every six months to make the cryptocurrency less appealing to application-specific integrated circuit (ASIC) miners.
The measure was initiated after Bitmain announced a new super powerful Antminer X3 ASIC miner designed specifically for calculations based on the CryptoNight algorithm, which is the basis for such cryptocurrencies as Monero (XMR), ByteCoin (BCN) and AeonCoin (AEON).
The dominance of Bitmain shook the reputation of industry giants AMD and Nvidia, whose shares fell sharply after Wall Street firm Susquehanna reported that Bitmain’s new Ethereum miner would increase its competitiveness on March 26.
Manufacturers of miners monopolize the market
Today, as the cryptocurrency market stagnates, mining may be the only way to earn a profit. This forces the largest manufacturers of video cards and specialized ASIC chips to bring new, more productive models of devices to the market. ASIC-based miners outperform competitors' CPUs and GPUs, creating a real threat of mining becoming concentrated among the largest players, who have access to the most powerful equipment.
Some in the blockchain community are concerned that this kind of centralization could damage network security. Decentralization based on competition between miners helps to defend the system and its participants, protecting the network from intruders. That's why token developers are forced to create artificial obstacles to the use of ASIC equipment.
Due to the present monopoly on the production of ASIC miners and the expansion of its positions in the mining equipment market as a whole, according to Bernstein analysts, Bitmain earned about $4 bln last year, the same amount as Nvidia. It is noteworthy that Bitmain achieved this level six times faster than Nvidia, who took 24 years to achieve these levels of profits.
Antminer X3 developers promise $4,500 profit per month
As stated by CryptoCompare, the new ASIC can give up to $4,500 in monthly profits to its owner, but its calculation process is based primarily on the involvement of devices in Monero network transactions, which may lead to disrupting XMR network functioning.
Image source: CryptoCompare
The developers of Monero, in their turn, published posts disparaging the Antminer X3's usefulness. Monero devlead Riccardo Spagni noted on his Twitter page that it “WILL NOT work” for Monero, since Monero's core development group (CDG) is going to perform regular updates of the hashing algorithm.
Moreover, the upcoming hard fork will be aimed at making significant changes to the Proof-of-Work (PoW) protocol in order to prevent potential threats from ASICs.
Just a reminder that this WILL NOT work on Monero https://t.co/rhy6k2I4Yh
— Riccardo Spagni (@fluffypony) March 15, 2018
To prevent the centralization of mining, changes would be regularly made in the protocol, making it impossible to calculate Monero using the new high-performance devices. The first update, preventing XMR mining on any types of ASIC chips, has already been released.
Some experts expressed their support for the official devteam of Monero on the issue of updating the algorithm.
Antonio Moratti, co-founder of the GoByte platform, which uses the NeoScrypt algorithm, said that he “would do the same". He told Cointelegraph:
"GoByte was X11 in the testing phase. And some users have already started mining with ASICs. And we decided for NeoScrypt. Even that the GPU temps are not so good compared to other algos. I think XMR will have a new algo. I would do the same."
David Vorick, the founder of Siacoin, wrote on his Reddit post:
"Bitmain has historically been very greedy, and very willing to sacrifice the well-being of the community, of their customers, and of the ecosystem, if that means they can make a couple of extra dollars."
Surge of hashrate
Monero’s steps to prevent potential threats from the new Bitmain equipment may have been prompted by a surge in network hashrate to 1.07 GH/s, observed in mid-February 2018, when XMR mining activity soared.
Image source: Coinwarz
Some users linked the surge to the subsequent Bitmain announcement to sell used devices. A month ago one popular Reddit user made a post where he suggested that Bitmain might “calculate very thoroughly when to announce and sell them [ASICs] so their customers will be (or think they will be) able to make some pennies”.
Here is how he describes what comes next - “dump their used equipment on the market by batches as the new version batches comes in freshly manufactured”.
Bitmain reputation: developers feel “anxiety”
Earlier, Bitmain had already acquired an ambiguous reputation before the start of sales of a new model. In January, amid negative rumors about the chance of extremely high network values, which could be created by the mass launch of Antminer designed for SiaCoin, the latter refused to support the algorithm at all.
Unexpectedly, Siacoin founder David Vorick and his ASIC manufacturing company, Obelisk, found themselves in competition with Bitmain, which has a near monopoly on Bitcoin mining equipment.
In his Reddit post Vorick expressed dissatisfaction, saying that the developers of Siacoin feel "anxiety".
Later, SiaTech leader Zach Herbert gave an official green light to Bitmain and said they would “not invalidate A3 miners via soft-fork unless Bitmain takes direct action to harm the Sia project”.
After much consideration and discussion, we've decided to not invalidate A3 miners via soft-fork unless Bitmain takes direct action to harm the Sia project. We're incredibly excited about 2018 and will move forward stronger. Full response: https://t.co/I96X8Y6VMi
— Sia Tech (@SiaTechHQ) January 25, 2018
Mine or buy?
Although Antminer X3 can contribute to the production of cryptocurrencies based on CryptoNight technology (DarknetCoin, AeonCoin etc.), the previous equipment for mining XMR usually did not bring a profit comparable to the potential profit from Monero's trading circulation.
According to the analysis made by Reinisfischer XMR mining is “profitable, but not as lucrative as mining ether”, moreover “it would break even after a year of operations”.
For instance in December with $3,880 (the average price of 12 GPUs) one could earn about $1,940 in 10 days,
Image source: Coinmarketcap
while CryptoCompare says mining with GPUs for the same price would bring only $325 per month.
Image source: Cryptocompare
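Those figures line up with a simple payback-period calculation. A minimal sketch using the article's numbers; reading the $1,940-in-10-days figure as ether revenue and the $325/month figure as GPU-mined XMR revenue is my interpretation of the text, and the calculation assumes revenue stays constant:

```python
# Simple payback-period check on the article's mining-profit figures.
def payback_days(rig_cost_usd: float, revenue_usd: float, period_days: float) -> float:
    """Days until the rig cost is recovered, assuming constant revenue (a big assumption)."""
    return rig_cost_usd / (revenue_usd / period_days)

rig_cost = 3880                                  # 12-GPU rig, per the article
print(payback_days(rig_cost, 1940, 10))          # ether figure: ~20 days to break even
print(payback_days(rig_cost, 325, 30) / 30)      # XMR on GPUs: ~12 months, i.e. "about a year"
```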
Beyond the immediacy of security issues, in general, Bitmain's activity does not affect the state of individual cryptocurrencies.
That's what Antonio Moratti, co-founder of the GoByte platform – which works on the principle of decentralized mining, a potential competitor to the current technology – told Cointelegraph:
“I don’t think a privacy coin would want to gather a lot of attention. XMR can mature further on its own without any further PR scandal.”
Will CryptoNight algorithm be used?
The fundamental task of CryptoNight is to eliminate the gap in token production between users of standard PCs and owners of specialized ASIC devices. The algorithm relies on a large block of data with an unpredictable access sequence held in the computer's RAM; the data is stored there temporarily and reused rather than recalculated at each access.
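To make the memory-hard idea concrete, here is a toy Python sketch of a scratchpad-style hash: it fills a buffer in RAM with pseudorandom data and then reads and rewrites it in a data-dependent order, so the whole buffer has to stay resident in fast memory. This is only an illustration of the general principle, not the real CryptoNight algorithm; the sizes, mixing steps, and round count are arbitrary choices of mine.

```python
import hashlib

def toy_memory_hard_hash(data: bytes, scratchpad_kib: int = 64, rounds: int = 10_000) -> str:
    """Toy scratchpad hash illustrating the memory-hard idea behind CryptoNight-style PoW.
    NOT the real CryptoNight: sizes, mixing, and round counts are arbitrary."""
    size = scratchpad_kib * 1024
    # 1) Fill a scratchpad in RAM from the input, 32 bytes at a time.
    pad = bytearray(size)
    seed = hashlib.sha256(data).digest()
    for off in range(0, size, 32):
        seed = hashlib.sha256(seed).digest()
        pad[off:off + 32] = seed
    # 2) Read/modify the scratchpad in a data-dependent order, so it must stay resident.
    state = int.from_bytes(seed, "big")
    for _ in range(rounds):
        idx = (state % (size // 32)) * 32
        chunk = bytes(pad[idx:idx + 32])
        state ^= int.from_bytes(chunk, "big")
        pad[idx:idx + 32] = hashlib.sha256(chunk + state.to_bytes(32, "big")).digest()
    # 3) Final digest over the running state and the whole scratchpad.
    return hashlib.sha256(state.to_bytes(32, "big") + bytes(pad)).hexdigest()

print(toy_memory_hard_hash(b"block header bytes"))
```

Because every round depends on a random location in the scratchpad, an ASIC gains far less from fixed-function logic than it does with a pure compute-bound hash; memory bandwidth becomes the bottleneck for everyone.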
Compared with the same Scrypt algorithm, the CryptoNight structure has a number of technical advantages:
- small time intervals between blocks (transaction speed less than 60 seconds),
- smoothly falling emissions,
- less central processing unit (CPU) and graphics card heating than when mining on other algorithms,
- use of CPU + GPU binding and thus achieving faster access to RAM, increasing the speed of transactions.
Bitcoin, Litecoin, Dash, Decred and Sia - one by one these cryptocurrencies have become "victims" of ASIC miners. Miners may consider a cryptocurrency to be more centralized after the appearance of specialized devices, although practice has not yet proven this.
Monero may become the first of the leading cryptocurrencies to launch a radical means of combating ASIC miners through updates to the CryptoNight hashing algorithm. Further events will show how much the algorithm will be modified and whether this will affect sales of the Antminer X3."
And so far, the new hashing algorithms are performing best on the newer Ryzen 2.0 CPUs and Vega / Polaris GPUs.
-
Wonder when this whole crypto-currency thing goes the way of the Beta-max :
Article: Crypto Exchanges Pause Services Over Contract Bugs
Article: Bitcoin more vulnerable to attack than expected
-
Support.2@XOTIC PC Company Representative
Fingers crossed it sticks and GPU prices go back to sanity.
Nvidia Thread
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Dr. AMK, Jul 4, 2017.