So it's over?
-
-
1) The High Bandwidth Cache Controller allows for 50% higher average fps and double the minimum fps;
2) They have, in one game with one vendor (Sniper Elite 4), achieved 100% multi-GPU scaling (compare that to Nvidia's diminishing SLI support over the last couple of generations);
3) They are working with software developers both to take more advantage of 8C/16T chips and to develop games that better utilize their GPU products.
I could spell out what the possible impact of these things is, but I'd be preaching to the choir. I don't just mean their performance, but also how this affects their competitors.
Sent from my SM-G900P using Tapatalk -
-
I would not want the second string Ryzen 1600X CPU if it's not the best product they can offer. I have no enthusiasm for any CPU that has locked cores or crippled TDP, so if that happens with any Ryzen CPU we can kick that option to the curb. If everyone has the same experience from a product, I find it boring and useless. I don't appreciate a canned cookie-cutter experience. How well the flagship Ryzen 7 performs at its maximum overclock will be the deciding factor for me. Same for the flagship Vega. There is no joy in owning a PC with belly button components.
As Brother @Johnksss@iBUYPOWER has pointed out several times, they are comparing apples and oranges. Comparing Ryzen 7 (their best) to anything less than the best Intel brings is a marketing mistake. I want to see how the best overclocked Ryzen 7 compares to 5960X overclocked, not a lesser performing CPU. I want AMD to win, but they need to choose the right target or they'll end the race in second place. And, the cost-to-performance ratio rhetoric is junk science that doesn't win any contests.
P870DM3 is the best laptop I have ever owned, especially from a performance perspective. This trumps everything for me. That's why I am interested in seeing how things turn out with Ryzen 7 overclocking, and finding out if I can have that in a laptop. If I can't then it won't be relevant to me unless I go back to desktops. If all we end up with is BGA garbage from AMD, then nothing they can offer on the mobile front will matter to me.
For the record, I would not waste any money on a 7700K (or any other quad-core CPU) in a desktop. The only reason I have a 7700K in the P870DM3 is because it is the most powerful option currently available in a laptop. I'd rather have a 5960X in a laptop, but nobody sells that.
To your point, there is no acceptable excuse for the shenanigans that got pulled with musical GPU form factors that blocked GPU upgrades, but we can thank the NVIDIOTS for most of that. That stupid stunt almost ended my appreciation for high performance notebooks and I am still not offering any forgiveness for it.
So yes... I want AMD to win. It needs to be a decisive win, with no caveats or 'yeah-but' strings attached. If we have to put up with lame excuses from AMD with Ryzen and Vega, then they will show themselves to be no better than Intel and NVIDIA. -
-
And, for comparison, this evening is Nvidia's event:
http://www.techradar.com/how-to/how-to-watch-nvidias-gdc-2017-livestream
It will be interesting to see what may come from it as a counter to this presentation (as the 1080 Ti should be discussed, as well as potentially the Pascal part II decision for this year)... -
I'm hoping the rumored Pascal refresh might be the topic, so all the Pascal SKUs can get upgraded performance. -
Edit: In other words, cashing in on name and brand performance history... -
BTW, I want thoughts from people on this:
AMD said 40% IPC improvement but delivered 52%. That already makes Ryzen comparable, clock for clock, with Skylake/Kaby Lake. For Zen+ they are shooting for another 20% IPC. Intel is promising 15% IPC with Cannonlake, but they claimed 15% for Skylake and Kaby Lake too, both of which seem to have fallen short of that in real-world testing. So, that being the case, what do you believe should be expected from both Intel and AMD in the coming years?
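To put rough numbers on it, here is a quick back-of-the-envelope sketch using the percentages above (thread speculation, not confirmed figures), assuming Ryzen is at rough clock-for-clock parity with Skylake/Kaby Lake today:

```python
# Hypothetical compounding of the IPC targets discussed above.
amd, intel = 1.00, 1.00      # normalized per-clock performance today
amd *= 1.20                  # Zen+ target: +20% IPC
intel *= 1.15                # Cannonlake promise: +15% IPC

print(f"AMD:   {amd:.2f}x")
print(f"Intel: {intel:.2f}x")
print(f"AMD lead at equal clocks: {(amd / intel - 1) * 100:.1f}%")  # about 4%
```

Even granting Intel the full 15%, those targets would leave the two within a few percent of each other at the same clocks.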
-
BTW, I don't think I have seen this (about RAM) posted yet.
-
But if you compare Intel's roadmap and assume Zen+ comes in 2019, then even if AMD delivers on 20-30%, Intel actually stays in lock-step, with Cannonlake giving 10% and its next chip giving an equivalent bump. So, unless Zen+ also comes with a 7nm die shrink (in 2018 GloFo switches Fab 8 over from 14nm to 7nm), they stay in lock-step, which means more gimmicks and advertising on both platforms (AMD's and Intel's) rather than great leaps. Now, this changes in 2020-21, as both have to update their platforms to incorporate PCIe 4.0 and DDR5 (which will be trading blows with HBM3; AMD would be smart to integrate HBM3 and the HBCC with its CPUs, acting almost as a fourth-level cache with DDR5 as a backup. Notice how many times Raja kept saying standard RAM is dead in his presentation, which might hint at uses outside of video cards and APUs).
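A similarly rough sketch of that lock-step scenario, again using the thread's hypothetical numbers rather than anything confirmed:

```python
# Hypothetical multi-generation projection out to ~2019:
# Cannonlake +10%, its successor another ~10%, versus Zen+ at +20-30%.
intel = 1.00 * 1.10 * 1.10            # two ~10% steps
amd_low, amd_high = 1.20, 1.30        # Zen+ uplift range

print(f"Intel by ~2019: {intel:.2f}x")                       # 1.21x
print(f"AMD by ~2019:   {amd_low:.2f}x to {amd_high:.2f}x")  # 1.20x to 1.30x
```

Under those assumptions both land within roughly ten percent of each other, which is the lock-step outcome described above.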
Intel is already pushing Optane to try to prevent the switch to AMD (I've noticed more news stories mentioning it). But, other than those aspects, I believe they are really scared....
Edit: which really means the best overclocker wins... -
-
This is the best case scenario:
-
Update: Music started... on both Twitch and YouTube...
http://forum.notebookreview.com/thr...scussion-thread.794965/page-373#post-10473036 -
With all that said, I do love knowing the "science", the methodology behind how things work, yet I'd rather explore the inner workings myself instead of hearing what is, essentially, marketing rhetoric. -
Having one good option will always be bad. Doesn't matter whether it's red, green or blue. I do not care about brand and I will not show any loyalty to Intel, NVIDIA or AMD. I will always want the one that gives me the better product with the most overclocked horsepower regardless of the brand. I do not have warm fuzzies for any of them, but I do enjoy seeing an underdog get the best of an arrogant brat (Intel and NVIDIA). It does not happen that often, but I like seeing it when it does. There always has to be a winner and a loser, but until Intel starts taking some damage all we will have is mediocrity. It doesn't need to be a fair fight, as long as it is a bloody and violent one and both are determined to inflict serious damage on their opponent. If that happens even the fanboys for the losing side win because they get something far better than what they started out with. -
I was thinking of the ASRock Fatal1ty board
Another note: As we can see from this thread alone exploding into conversation and speculation, the mere threat of some competition from AMD has made many people more interested. I love it, and love the debate and the competition. This is what PC tech is supposed to be like all of the time, not every seven years. One year Intel is on top, and the next AMD... etc.
That is what will keep the PC alive and kicking. With Microsoft, Intel, etc. releasing garbage as of late, the PC as we know it will surely die without this competition. -
"Intel believes PC users won't care about the manufacturing process, just the performance of the chips"Ashtrix, DukeCLR, ajc9988 and 1 other person like this. -
-
Meanwhile, Nvidia starts by dropping the cost of the 1080 by $100. Do with that what you will!
Edit: So, they have an 11GB card and attribute a large part of its performance to tiling, which AMD now also does on Vega...
But this stream is so screwed up, I may have to build on this after...
Edit 2: "we improved the cooling and now it runs at 35% faster than 1080, overclock over 2ghz on stock fan, cost $699, overclock it 15-20% faster than that, release next week."
Edit 3: So, $50 more than the 980 Ti at release. -
@D2 Ultima - so, could you give a breakdown of the things learned of Vega so far and what the 1080 Ti says about the current situation with AMD and Nvidia?
http://www.anandtech.com/show/11172/nvidia-unveils-geforce-gtx-1080-ti-next-week-699 -
Unless Vega is SIGNIFICANTLY stronger than we think it is, or AMD is pushing Vega to be low-end (replacing Polaris and making them the new RX 560/570/580) while getting ready to pump out Navi later this year or early next year, nVidia won.
The 1080Ti isn't anything special on its own, but it's getting sent to AIB partners, and that means hybrid cards will show up, so it's no longer necessary to purchase a $1200 GPU and $100-$300 worth of watercooling for it... now you get it in a single package for, I'm assuming, $800 or thereabouts. Vega's current rumor mill and leaks, comparing it to Fiji, put it between a 1080 and a TXP. But most TXP benches are on air. As I had to point out to (I think) PCGamer's writer, who said twin 1080s are better than one TXP because a TXP was not much faster than twin 1080s: the TXP needs watercooling because of how Pascal works, and once that's done it rises to a solid ~40-45% faster (especially since they have the same OC limits)... this means that the 1080Ti with a solid WC block would obliterate both the 1080 and Vega... for about half the price of a properly working TXP now.
So, let's wait and see. I doubt AMD is able to release an in-between 1080/1080Ti card which overclocks effortlessly for $400. -
1) The 1080 Ti is $699, $100 cheaper than you expected.
2) Nvidia attributed about 25-30% of their performance to tiling and dropping unseen pixels, which AMD says it has now adopted (Nvidia adopted this with Maxwell, and it's why AMD could have the edge in raw power, e.g. outperforming at bitcoin mining, while still falling behind in games).
3) They are shooting for overclocking the RAM to 11,000MHz effective (11Gbps), cutting some ROPs and L2 cache relative to the Titan card, and relying on better cooling to get the 35%.
4) Recently, Vega was compared to the Titan XP, not the 1080. They both had similar frame rates, but critics said the demo was frame-rate limited. This was seen at the Ryzen event with Su a week ago.
5) You haven't discussed the implications of 8GB of HBM2 vs 11GB of GDDR5X at 11Gbps, or, more to the point, whether throwing more RAM at the problem just encourages lazy development (also the claim that the 11Gbps clock brings performance almost up to HBM2).
6) AMD claims Vega's High Bandwidth Cache Controller gives +50% average FPS and +100% minimum FPS, a bold claim.
7) The 1080 Ti has 11.5TF while Vega has 12.5TF. With all the improvements mentioned above, including the adoption of tiling that accounts for 25-30% of Nvidia's performance (according to their slide, if apportioned proportionately), would your analysis change, especially considering the $600-700 range is what AMD has hinted at for pricing?
8) AMD has, in one game, Sniper Elite 4, achieved 100% multi-GPU scaling. In addition, they are working more closely with developers and software engineers moving forward, investing a significant amount, like you wanted. Does this impact your analysis?
Edit: 9) The base Vega clock is over 1.5GHz, but as it is a different architecture, it's hard to tell what that means...
Sent from my SM-G900P using Tapatalk -
2 - This might be true, but Vega is still GCN... it could be how they got that performance out of that architecture in the first place.
3 - I noticed. I think the card is at the point where it might become ROP bound, but I can't say. It is possible that ROP-heavy titles/scenarios will have Titan X Pascals overtaking the 1080Ti, rather than being more or less equal. 1080Ti has more memory bandwidth than TXP, that's factual. You can OC the TXP though, and you may not be able to OC the 1080Ti very far. So this is an insignificant point to me (it's also only 4GB/s more).
4 - TXP, if not watercooled, has a 10-20% performance cut from normal. Especially if they didn't ramp fanspeed up to 100%. Any tests done without average clockspeeds shown on the TXP can be completely disregarded in practice, and the 1080Ti is certainly going to cream any such tests once it comes from an AIB partner.
5 - I think 8GB is fine, and I don't think the 11GB will make that much sense in the end. What I do not understand is how HBM2 expects to act like a cache. It sounds fine in theory, but what this really means is that they want assets to be streamed in/out on demand rather than stored entirely, cutting down "memory usage". This only works if your RAM is very good, your paging file is a good size and on an SSD, your game is on an SSD that isn't your pagefile SSD, and it could possibly hit the CPU harder having to stream such data in and out. If you don't believe me, try loading maps on some games with an OSD on. 9/10 times, loading a level (shoving all the assets into RAM/vRAM/etc) ramps your CPU up to astounding levels. Black Ops 1 hits everyone's CPU to 100% when loading a level for a short period of time, for example. Many other games do it to such a degree as well. Think of this on a somewhat lesser scale, but happening rather often. This is my GUESS as to what HBM2 being a cache-type feature might do. So this is one matter I'll need to see in practice. Also, assuming 500MHz on a 4096-bit bus is HBM2's speed/setup, then 512GB/s is its bandwidth, which is still higher than the TXP/1080Ti (rough numbers in the sketch at the end of this post). Also, while Pascal has an even further improved memory architecture over Maxwell, making bandwidth-for-bandwidth not a good comparison, HBM2 is not actually out yet on Vega and it might even be possible that Vega launches with even better colour compression etc. So bandwidth is even more blurred to calculate.
6 - If you're vRAM bound, it certainly can... but remember the rules I gave above to get such a sweet scenario. If you're in a game that neither stresses amount nor stresses bandwidth, then you'll probably find no benefit from HBM2 cache in itself.
7 - AMD teraflops and nVidia teraflops cannot be compared with each other. I never considered them interchangeable. When it comes to games, there are lots of other factors that apply, and teraflops has never been a good indicator. Most of nVidia's cards have beaten AMD at games with significantly worse teraflops, for example. The 780 was under 4 teraflops (base clock) and the 290 was 4.9 teraflops (up to peak perf), but the cards mostly traded blows. Or the 980Ti (5.63 teraflops) versus the R9 Fury X (8.6 teraflops!) - the sketch at the end of this post shows where those numbers come from.
8 - I've seen 104% scaling in Firestrike before. But I've said it before and I'll say it again: Crossfire is worthless. It needs fullscreen to be used. It has no NVPI counterpart to force mGPU on anything that does not directly support it (goodbye lower profile indie games!). The number of Crossfire profiles is far lower than the number of SLI profiles, and the number of SLI profiles is rather low these days. In this case, I can't see Crossfire ever being worth the money. At least with nVidia I can do other things with my second card if I really wanted. Not so for AMD.
My analysis was actually made with all of that already taken into account. Either Vega replaces Polaris in the hierarchy and Navi takes the top spot to duel Volta (since Vega STILL isn't out), or Vega is *SIGNIFICANTLY* better than AMD is letting on. If not, then the Vega flagship (estimated between 1080/TXP) is going to need to cost about $400, since 1080s dropped to $500 MSRP and hybrid cards were only about $50-70 more than MSRP a short while ago.
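For reference, here is a rough sketch of where the paper numbers quoted in points 5 and 7 come from. TFLOPS is just shader count x 2 ops per clock x clock, and bandwidth is bus width x per-pin data rate; the clocks and bus widths below are public reference specs, except the Vega line, which uses the rumored 4096-shader / ~1.5GHz / 500MHz-HBM2 configuration and should be treated as speculation:

```python
def tflops(shaders, clock_ghz):
    # FP32 throughput: shaders x 2 ops/clock (FMA) x clock, in TFLOPS
    return shaders * 2 * clock_ghz / 1000

def bandwidth_gbs(bus_bits, gbps_per_pin):
    # memory bandwidth: (bus width in bytes) x effective per-pin data rate
    return bus_bits / 8 * gbps_per_pin

cards = {
    "GTX 780":        (tflops(2304, 0.863), bandwidth_gbs(384, 6.0)),
    "R9 290":         (tflops(2560, 0.947), bandwidth_gbs(512, 5.0)),
    "GTX 980 Ti":     (tflops(2816, 1.000), bandwidth_gbs(384, 7.0)),
    "R9 Fury X":      (tflops(4096, 1.050), bandwidth_gbs(4096, 1.0)),  # HBM1
    "Titan X Pascal": (tflops(3584, 1.531), bandwidth_gbs(384, 10.0)),  # GDDR5X
    "GTX 1080 Ti":    (tflops(3584, 1.582), bandwidth_gbs(352, 11.0)),  # GDDR5X
    "Vega (rumored)": (tflops(4096, 1.526), bandwidth_gbs(4096, 1.0)),  # HBM2 @ 500MHz DDR
}

for name, (tf, bw) in cards.items():
    print(f"{name:15} {tf:5.2f} TFLOPS  {bw:4.0f} GB/s")
```

Which is really the point: Fury X tops that table on paper and still lost in games, so these numbers only show where the figures quoted in the thread come from, not how the cards actually perform.
-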
Also, the virtual memory allowed on the HBCC is some insane amount like 512TB. This might be hinting at them using a similar controller on the highest-end parts for deep learning...
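That 512TB figure lines up with the 49-bit virtual address space that has been floating around for the HBCC; assuming that leak is accurate, the math is simply:

```python
addr_bits = 49                          # assumed, per the rumored HBCC spec
print(2 ** addr_bits / 2 ** 40, "TiB")  # -> 512.0 TiB of virtual address space
```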
But, I did want to get the deeper details on your thoughts, which is why I asked. Thank you for the explanations!
Sent from my SM-G900P using Tapatalk -
Nice discussion, just a few clarifications. Vega covers the upper-mid to top tiers; Navi would scale all the way from bottom-line mobile parts to the absolute top, whatever that translates into in performance. It's still GCN, but that doesn't mean much; for all I know Maxwell and Pascal are barely different from each other, yet they carry different names. I haven't paid much attention to nGREEDIA's die shots, maybe I should. I'm certain they haven't done a total redesign in a while either. Could be wrong.
-
Sent from my OnePlus 1 using a coconut -
Meanwhile, what is your opinion of AMD's implementation of their new primitive shader sitting under the normal path, and of the changes to the rasterizer that discard certain tiles and non-visible data, in light of the prefetch in Ryzen?
Edit 2: Also, Nvidia won't have a huge die shrink to fall back on (going from 16nm to 10nm is not huge, if Nvidia adopts TSMC's 10nm next year at all; see the 28nm lifespan)...
Edit 3: If it isn't hUMA, it is a precursor. But considering they will not do a major redesign until 2020-21 and both companies will have it soon, I'm a betting man that it is one of the two.
Also, reading through the articles, the three components of why Nvidia says its bandwidth performance is better are raw bandwidth (which we already discussed), about 30% attributed to tiling (also discussed above, and likely close to nullified between AMD's adoption of it and the new primitive shaders), and compression (this is the one I forgot about last night, but it is something AMD may be able to do comparably with the extra TF)....
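As a purely illustrative way of seeing how those three factors stack up (the ~30% tiling figure is from Nvidia's slide; the compression ratio below is a made-up placeholder, and real workloads won't decompose this cleanly):

```python
raw_gbs     = 484    # 1080 Ti raw bandwidth: 352-bit bus x 11 Gbps
tiling_gain = 0.30   # Nvidia attributes roughly 30% to tile-based rasterization
compression = 1.20   # hypothetical average delta-color-compression ratio

effective = raw_gbs * (1 + tiling_gain) * compression
print(f"Illustrative 'effective' bandwidth: {effective:.0f} GB/s")  # ~755 GB/s
```

If AMD's tiling and primitive shaders really do close that middle term, as suggested above, the remaining gap mostly comes down to compression.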
Sent from my SM-G900P using Tapatalk -
As for the node sizes, please remember that while Pascal did indeed benefit from a die shrink, there was no new architecture change. It's Maxwell. Volta CAN be a huge improvement since it's essentially two generations' worth of architectural changes (Pascal is improved over Maxwell in areas other than raw processing power). But it doesn't need to be. Either way, Vega isn't getting a die shrink at all either, and the gains from Polaris' node are basically already spent. Certainly they're adopting some things from nVidia, sure, but the end result is all that really matters. If Vega is barely keeping up with a throttling TXP, then it doesn't matter what tech they shove in it, you feel me?
I don't know enough about the primitive shader sitting under the normal path or any changes to the rasterizer. Discarding tiles and non-visible data seems to work fine on Pascal; games look as good as or better than on my 780Ms, so I can say it clearly works well. I won't comment on what I don't understand, or haven't experienced, though.
And really, trust me. Disregard the teraflops. It means nothing in gaming, trust me. -
Why then, unless for manufacturing reasons, would you show what you have shown, knowing Nvidia was going to present the same day (although not realizing the downward shift in prices)? This is why I think AMD is doing what they did with Ryzen: don't tip your hand until it is too late. I also think it was used to flush out any other information about the Ti with enough lead time for them to adjust their offering with Vega, much in the way Mr. Fox mentioned Nvidia often does to AMD, allowing for firmware and other tweaks.
So please bear with me while I work through the information. Video cards are not my strong suit for analysis; general strategy and certain other hardware components are more my wheelhouse...
Edit: To put this in perspective, going from $800 to $700 is a cut of 12.5%. I do not know the cost of production per card, but after that is removed, this is likely a 25% or greater hit to the expected margin. Voluntarily taking that big of a hit before the competitor's card is even released suggests this is a move to squeeze them out of their margin, which does show some fear of that card's performance.
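A rough illustration of that margin math, with a made-up per-card production cost just to show the leverage a price cut has on margin:

```python
expected_price, actual_price = 800, 700
unit_cost = 450                               # hypothetical production cost

margin_before = expected_price - unit_cost    # 350
margin_after  = actual_price - unit_cost      # 250

print(f"Price cut:  {(1 - actual_price / expected_price) * 100:.1f}%")  # 12.5%
print(f"Margin cut: {(1 - margin_after / margin_before) * 100:.1f}%")   # ~28.6%
```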
Sent from my SM-G900P using Tapatalk -
My reasoning is that if Vega's best only works to trade blows with a throttling TXP, then it'll never touch a 1080Ti (since a 1080Ti will perform like a proper-functioning TXP). Hence my saying multiple times: either Vega is MUCH stronger than has been leaked, or what has been leaked is intended for replacing Polaris (whether that means Navi early 2018 (2018 is its roadmap) or big Vega is still under wraps), or Vega is going to need pricing around $400 USD for the flagship high end part. There is zero point pushing a $600-$700 non-overclockable GPU that doesn't match the bigger card within its price bracket. Also, a hybrid 1080 might be capable of OCing to bridge that gap (+30%) for under $600 without much problem. I know most users don't overclock, but in the case of Pascal, it's literally hitting +200MHz on MSI Afterburner and you're pretty much good to go with no voltage adjustments needed. So Vega needs to be priced to combat that as well.
One thing is for sure, though. nVidia is fighting to keep market share. This is the same thing that happened with the 980Ti. Fiji was scary so they went all in and brought out a real contender. Except this time it's $500 less instead of $350 less, and 1080 MSRP dropped to 680's, which is less than the 980's MSRP ever was, even when 980Ti cards were literally under $100 more than 980s. They WANT that marketshare. But Vega is so late; Pascal is almost a year old already. Even if Vega outpaces the 1080Ti by a bit (counting overclocking) Volta is realistically under a year away, and GV100 needs to be out this year. If GV104 doesn't beat GP102, then they can always launch GV102 in consumer cards early 2018. It won't even be a year and AMD has been SO SLOW in putting out their cards.
Polaris was demoed in 2015, and it took until end of 2016 to even see light. Vega was known working on since then, and has been demoed in mid 2016. It's now the end of Q1 2017, and Vega ain't coming out anytime soon. It's closer to maybe the middle of the year by guesstimates, which means there's a problem. I can see nVidia's marketing strategy here, and the fact that they've stopped robbing people blind for GPUs also helps their sales quite a bit. Buy up 1080Ti now! Have no money for Vega! Etc. It's sound, and the majority of the public will bite. I know people with 1080s who want 1080Tis after buying 1080 Scammer Editions for $700 on launch, not even realizing the massive rip off, ready to spend another $700 for the same architecture to get about 45% better performance, with most of them not even using high refresh or 4K screens. It's downright silly what the idiot public does. But this is what gets em sales, and then people are gonna be all "I'm broke" later on.
Just remember. Paper specs mean nothing. Fury X in some scenarios performed like 390X because it was ROP limited, rest of the specs be damned. What matters is when it comes out and you toss games at it and see how it handles. It's possible Vega can cream Pascal, but we don't know. However, from my point of view, AMD is now too late unless they announce Vega *AND DEMO IT* before the week beginning March 5th, showing that it creams TXPs under water, and giving it a proper good price. Otherwise the people willing to shell out for a new top end card that were waiting for 1080Ti will simply do so and the card will sell out, a-la 1080 launch period. Then when Vega launches in a couple months people have already bought the new lower-priced 1070s and 1080s, or the 1080Ti. And be broke.
AMD is taking too long to be profitable. If this were near Pascal's launch and they wanted to wait 2 months I'd say go ahead. But at this point where we're past the half life of Pascal, the longer they wait is the closer Volta is, and even if Volta is only 30% faster than pascal, if Vega can't compete with top end Pascal, then the minute volta launches they're done, stick a fork in em. -
It's weird that the Pascal GPU will boost up high even around 80C, but its power delivery isn't stable much above 50C.
Gamers Nexus and JayzTwoCents had videos showing that behavior; water-cooling or hybrid cooling really made a big difference in performance stability.
What about Vega? I would assume a similar fab process would produce a similar need to keep the GPU cool, so hybrid or water-cooling will be a must for those GPUs too.
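For what it's worth, here is a crude, purely illustrative model of why cooling matters so much for Pascal's boost behavior; the temperature bins and clock offsets below are invented, since the real boost tables are card- and vBIOS-specific:

```python
def boost_clock(temp_c, max_boost_mhz=1900):
    """Hypothetical GPU Boost curve: ~13 MHz lost per 5C bin above 50C,
    plus a bigger drop once the thermal limit (~84C) is reached."""
    bins_over_50 = max(0, (temp_c - 50) // 5)
    clock = max_boost_mhz - 13 * bins_over_50
    return clock if temp_c < 84 else clock - 100

for t in (45, 55, 65, 75, 85):
    print(f"{t}C -> ~{boost_clock(t)} MHz")
```

Keeping the core in the lowest temperature bins (which is what a hybrid or full water loop does) is what keeps the clocks pinned at the top, which matches what those videos showed.
-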
Delidding your Ryzen...
Edit: Key points - thick PCB, proper IHS (no JOKE TIM jobs), direct-die cooling might be possible due to the PGA package. -
-
They cannot squeeze more out of Maxwell than they have. Maxwell was at the absolute limit of 28nm for nVidia, and 20nm showed no benefit worth pursuing. Pascal is Maxwell with the benefits in efficiency and clockability granted by not one but TWO die shrinks. Volta must be a new arch, hence I expect a sizeable jump.
Sent from my OnePlus 1 using a coconut -
Edit:
http://wccftech.com/nvidia-volta-12nm-finfet/
Even though the nomenclature is wrong on the "2080," it shows the likely use of TSMC's 16FF++ for the refresh, along with the memory improvements that were just announced. If it is true that the Volta flagship will be on either 10 or 12nm, and that a 7nm Vega variant is in the works for late 2018 with Navi on that node for 2019, then, once again, I don't expect a lot out of Volta, but I do from Vega and Navi in the coming years....
https://www.fool.com/investing/2017/01/10/dont-expect-nvidia-corporation-to-launch-volta-arc.aspx -
Back to Ryzen:
https://drive.google.com/file/d/0By7_TCzoyL9oejlBV1R0WVdKMkU/view
Wally west at Overclock.net posted this; they are screenshots of slides for tomorrow... -
The Model Number Architecture Legend's Power Suffix specifically mentions mobile:
H = High Power Mobile
U = Standard Mobile
M = Low Power Mobile
-
Saw that as well, but nothing else was mentioned, i.e. no info on the socket, though I think we all know how that will be. No info on the core count either, but I guess all of this is for the summer presentation (expected H2 launch).
As for Vega, rumors suggest an early April hard launch. They had better "leak" some info beforehand; nGREEDIA's cards are on the table after all, so they can do some math over the weekend. -
AMD Ryzen 7 1700 OCed To 4.0GHz - http://wccftech.com/amd-ryzen-7-1700-3dmark-multithreaded-benchmarks-leaked/
and some 1700x gaming benchmarks leaked here: http://wccftech.com/amd-ryzen-7-1700x-review-leak-gaming-overclock-benchmarks/