The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    Volta: NVIDIA's Next Generation GPU Architecture (2017-2018)

    Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Aug 14, 2016.

  1. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Likewise, sorry on this end, given your situation, it's understandable.
     
    ajc9988 likes this.
  2. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    I'm not sure I trust AdoredTV to give an objective overview of Nvidia.
     
  3. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Volta in 2017? | Vega 56 BIOS Flash to Vega 64
     
    Last edited: Sep 10, 2017
    Aroc likes this.
  4. Carrot Top

    Carrot Top Notebook Evangelist

    Reputations:
    74
    Messages:
    319
    Likes Received:
    274
    Trophy Points:
    76
    AdorkedTV is the biggest AMD fanboy/Nvidia hater on YouTube who has a following. Somewhere along the way he became team red's cult leader. I blame Reddit mostly.
     
  5. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    It's clear that neither of you watched either of the excellent GPU history videos.

    They're excellent historical recollections of the important turning points for Nvidia and ATI/AMD/Radeon, with strong detail and thoughtful consideration of all the important GPUs over many years.

    If you don't have the attention span or the will to listen in detail and think about the points and conclusions, that's fine, but why waste our time with thoughtless comments? Just let it pass and don't comment at all next time, ok?

    For those who haven't seen the videos yet, you've done them a disservice; to those of us who watched and enjoyed the videos, your comments look careless and tell us it's not worth paying attention to what you have to say in the future.

    The two videos are detailed and well presented, if a bit long and drawn out, but I like that kind of in-depth detail, so for me they were well worth the time. If you are easily distracted or bored, don't even try. :)
     
    Last edited: Sep 11, 2017
    TBoneSan likes this.
  6. Carrot Top

    Carrot Top Notebook Evangelist

    Reputations:
    74
    Messages:
    319
    Likes Received:
    274
    Trophy Points:
    76
    Ignoring Adored's character and colorful Internet history (you may or may not have seen his more inflammatory side in some forums and comment/discussion threads) for a sec: you might have a point, if the videos were actually any good.
     
    Robbo99999 and hmscott like this.
  7. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Geez, you keep digging a deeper hole for yourself. Just watch the videos, you might learn something :)

    He's not out to badmouth Nvidia or praise AMD in the videos; they are fair, balanced, detail-filled historical coverage of the Nvidia vs. AMD matchups of each era.

    They are well done, but they aren't action dramas or sci-fi romps; they are factual coverage of historical information. Boring to many, I am sure.

    To say AdoredTV's coverage was tilted, or that it's AMD fanboy coverage, couldn't be further from the truth, but you wouldn't know that since you didn't watch the videos :)
     
    TBoneSan likes this.
  8. Carrot Top

    Carrot Top Notebook Evangelist

    Reputations:
    74
    Messages:
    319
    Likes Received:
    274
    Trophy Points:
    76
    I'm not wasting my time on some nearly hour-long video series of information I already know, presented in a non-objective and deliberately misleading manner.

    AdoredTV is probably one of the prime examples of why I hate the YouTube format for tech content compared to the written word.
     
    jellygood likes this.
  9. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    But, and again, I can't overstress how obvious this is to all of us: you haven't watched the videos, so your critique is irrelevant.

    You are not fooling us, but you seem to be fooling yourself. :)

    The objectivity isn't in question when the claims are clearly backed up with factual information, trade-review comments, and data from others (not him) gathered from the time the GPUs being discussed were current.

    His conclusions aren't in favor of AMD; if anything they favor Nvidia. You are completely off base here, simply because you don't know what you are talking about, having not watched the videos.

    So, either watch the videos, or please stop wasting our time with your made-up comments. :)
     
  10. Carrot Top

    Carrot Top Notebook Evangelist

    Reputations:
    74
    Messages:
    319
    Likes Received:
    274
    Trophy Points:
    76
    TBH I don't care at all what you think. I just thought people should be aware of who Adored is and what his agenda is. I unsubbed and wrote him off already, same as with Linus. When the ratio of good videos/content to bad is that low, it's not worth the bother.

    So you're welcome to give me a TL;DR or timestamps. Isn't it funny that the dude could do a whole hour of masturbatory rambling yet couldn't be arsed to put timestamps in the description. But I'm not sitting through the whole thing. Some of us have limited time and actual lives, even if you don't. (Yeah, hiding your profile won't change the fact that you basically live here ;)).
     
    Robbo99999 likes this.
  11. Prototime

    Prototime Notebook Evangelist

    Reputations:
    201
    Messages:
    639
    Likes Received:
    883
    Trophy Points:
    106
    Can we bring the intensity of this conversation down about 12 notches? This is a discussion about an upcoming GPU, not a meta-discussion about how people should talk about it. If you don't like how someone is/isn't watching a video, please ignore it and move on. Clogging this thread with critiques of other forum members isn't constructive in the slightest.
     
  12. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yet that's exactly what you just did: clogging up the thread, critiquing other forum members.

    At least our discussion is on topic. :)
     
  13. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    You are wrong to project your fantasy of what his agenda is, because it's contrary to the conclusions given in the videos; you are spreading baseless FUD about them. Your conclusions, based on your prejudice against him, are complete fantasy, not connected in any way to the content.

    Timestamps for what? Each GPU discussion? It's a continuous historical discussion about the progression of GPU development over many years.

    You have spent far more time replying to me than it would have taken to watch the videos.

    At least my time was spent watching the videos, so I know what I am talking about, while your time has been spent prolonging your ignorance, and proving that ignorance to the rest of us.

    Your fantastical imaginings about his agenda are incorrect, and watching the videos will set you straight. :)
     
    TBoneSan likes this.
  14. Ionising_Radiation

    Ionising_Radiation Δv = ve*ln(m0/m1)

    Reputations:
    757
    Messages:
    3,242
    Likes Received:
    2,667
    Trophy Points:
    231
    Once again, talking about how to watch the video in question, rather than what the video is about in the first place... is not constructive. We're going in circles here. @Prototime attempted to mediate, but was shot down, rather snootily and haughtily.

    Come on, gents, the topic says Volta: NVIDIA's Next Generation GPU Architecture (2017-2018), not AdoredTV's YouTube Channel And Whether Or Not He's An AMD Fanboy.

    If people choose not to watch a two-dozen-minute-long video, or choose to watch it, it says nothing of them, or their character, or their daily life and what percentage of it they spend online.

    Different people choose and prioritise different things, full stop. Let's get back on topic, please.

    For the detractors, AdoredTV himself has posted a video about his disappointment with Vega, and even gave it a witty, sarcastic title:



    This isn't exactly the work of an AMD fanboy, who would sooner convince him/herself that Vega is an all-round good product (not for me, it isn't).

    Now, back to Volta...

    I think the cut-down GV104 chip (i.e. whatever the next XX70 card will be called) will outdo the current GP102 chip, even if the nanometre count won't really have decreased. I mean, look at the efficiency improvements nVidia managed to pull off with Maxwell versus Kepler, for example. A likely release date: squarely in the middle of Q2 next year.
     
    Prototime and hmscott like this.
  15. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    What I was pointing out was the folly of criticizing a video without watching it, based on a pre-judged opinion of AdoredTV and the videos he creates that clearly no longer applies, if it ever did at all. @Kevin @Carrot Top
     
    Last edited: Sep 11, 2017
  16. Miguel Pereira

    Miguel Pereira Notebook Consultant

    Reputations:
    11
    Messages:
    197
    Likes Received:
    160
    Trophy Points:
    56
    Honestly, I doubt the jump will be that big. Clock speeds most likely won't rise above the current ~2 GHz, so without a node shrink the jump will be more modest, I guess.

    From Kepler to Maxwell, clock speeds went up a fair bit, don't forget that.

    Sent from my MHA-L29 via Tapatalk
     
    Robbo99999 and hmscott like this.
  17. ThePerfectStorm

    ThePerfectStorm Notebook Deity

    Reputations:
    683
    Messages:
    1,452
    Likes Received:
    1,118
    Trophy Points:
    181
    I really hope the 2080/1180 (full GV104) has a lower thermal density than the 1080 (high thermal density being, as @D2 Ultima has said, one of the reasons Pascal is a hot architecture) by being physically bigger in mm². Then things like an updated Tornado F5 would be able to give people with 15" laptops a real boost, especially if they are upgrading from Maxwell or earlier.
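
    As a rough illustration with desktop numbers (an assumption for scale, not a claim about the laptop parts: the desktop 1080 is rated at 180 W on GP104's ~314 mm² die), thermal density is just power over area, so a physically bigger die at the same power runs cooler per square millimetre:

    \[ \text{thermal density} = \frac{P}{A} \approx \frac{180\ \mathrm{W}}{314\ \mathrm{mm^2}} \approx 0.57\ \mathrm{W/mm^2} \]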

    Sent from my SM-G935F using Tapatalk
     
    Ionising_Radiation likes this.
  18. Ionising_Radiation

    Ionising_Radiation Δv = ve*ln(m0/m1)

    Reputations:
    757
    Messages:
    3,242
    Likes Received:
    2,667
    Trophy Points:
    231
    I have. There's nothing new there, honestly—the GTX 1080 already uses a full GP104.

    What do we want, a leviathan 815-square-millimetre die for the GTX 1180? For some perspective, that's bigger than the entire PGA package of my 4710MQ. If such a GPU is pushed to the clock rates that gamers demand, then we'd need a quad-slot cooler and some outrageous power delivery. Or we may as well sit a CM Hyper 212 Evo directly on the GPU and call it a day.

    AdoredTV is detailed, but he forgets that the node-size decreases of eighteen years ago were twice as big as the entire node sizes we have today (220 nm to 180 nm and 180 nm to 150 nm, versus 16 nm and 12 nm). It is not easy to pull off such massive increases in performance with such narrow margins, hence nVidia's focus on efficiency rather than raw performance. Hell, he even called nVidia's skipping of the GTX 800 series dubious, which proves that for all of his research, he neglected to research (or did, but ignored) the devices this very forum is about: notebooks.
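
    To put rough numbers on that point (using only the node figures above):

    \[ 220 - 180 = 40\ \mathrm{nm}, \qquad 180 - 150 = 30\ \mathrm{nm} \]

    Each single step back then was bigger than today's entire 16 nm or 12 nm node, even though the relative shrink per step is comparable: \(180/220 \approx 0.82\) versus \(12/16 = 0.75\).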

    He drilled into so much detail about what were mere refreshes early on, but completely forgot about the ridiculously efficient first-generation Maxwell—GM107—in the GTX 750, 750 Ti, 860M and 960M. Methinks he got fatigued and slightly frustrated after having done so much research early on, and couldn't be ar*ed to do it as thoroughly nearer the end.

    It's easy to ask this of the companies, but difficult for the engineers working in them to pull off 100% or 130% increases nowadays. I don't think Intel is simply resting on its laurels (well, that's maybe 30% of the cause), it's more that it's just bloody difficult to fit circuits onto ever smaller nodes while maintaining the same efficiency gains and preventing quantum tunnelling. At the 100-200 nm nodes that wasn't even a problem; now it is a very significant issue that has completely upended Intel's clockwork-esque release cadence.

    AdoredTV's ending note was rather unsophisticated, to say the least.
     
    Last edited: Sep 11, 2017
    hmscott likes this.
  19. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,653
    Trophy Points:
    931
    Yoo expect the same efficiency improvement as the 28nm down to 16nm jump? :rolleyes:
     
  20. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    (Maxwell was 28nm too; that's probably not the point you're making, but I'm mentioning it for clarity's sake.)
     
    Ionising_Radiation likes this.
  21. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,653
    Trophy Points:
    931
    Sorry, I meant Maxwell vs. Pascal :(
     
    hmscott, Vasudev and Robbo99999 like this.
  22. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    You completely missed the point and the question at hand: were AdoredTV's coverage and conclusions those of an AMD fanboy, or were they fair and balanced, with no favoritism toward AMD and no outsized condemnation of Nvidia?

    The material was fair and balanced coverage that didn't favor one brand over the other, and for someone new to the subject there wasn't any need to forewarn potential viewers of any imagined built-in AdoredTV bias, as @Carrot Top or @Kevin felt was necessary.

    That's the BS I am exposing, to stop such prejudice before it spreads further.
     
    Last edited: Sep 12, 2017
  23. Ionising_Radiation

    Ionising_Radiation Δv = ve*ln(m0/m1)

    Reputations:
    757
    Messages:
    3,242
    Likes Received:
    2,667
    Trophy Points:
    231
    I ignored the 'point' and the 'question at hand' because they are unrelated to the point of this thread—nVidia Volta, and what history tells us about it. Whether or not AdoredTV is biased (I doubt he is, but he can get long and ranty, as demonstrated above) matters only to the extent of whether the source is accepted and legitimate in its content, and that discussion certainly isn't worth five pages of insults, ad hominems and so on.

    When I said 'nothing he says is new', it already means that, given current facts, AdoredTV's video is an accurate, unbiased source (albeit with pro-AMD overtones), because it corroborates known facts today. No need to say anything more.

    Mayhap you could read between the lines a little bit more yourself :)

    I don't want to waste any more thread space discussing the source itself rather than the content of the source in question; that has been done. Biased or not, I would still read/watch/listen to the source and form my own opinions. Saying 'this guy has an agenda, don't listen to him' already stifles debate. It's why I consume everything from Fox News, Breitbart, Buzzfeed News to CNN, BBC, Reuters, AP and AFP, regardless of the source quality and content.
     
    Last edited: Sep 11, 2017
    Vasudev, hmscott and Prototime like this.
  24. jellygood

    jellygood Notebook Consultant

    Reputations:
    17
    Messages:
    137
    Likes Received:
    46
    Trophy Points:
    41
    What utter nonsense and irrelevant ego-racing this topic has endured over the last 2 days...
     
  25. Miguel Pereira

    Miguel Pereira Notebook Consultant

    Reputations:
    11
    Messages:
    197
    Likes Received:
    160
    Trophy Points:
    56
    Guys... That's a bit too much, isn't it?

    On-topic: my guess for Volta is that Pascal and its similarity to desktop parts was a one-time deal.
    For Volta we will have M versions and full-fat versions at an even bigger premium. Think 980M and full-fat 980.

    They are already testing the field...

    Sent from my MHA-L29 via Tapatalk
     
    hmscott and Ionising_Radiation like this.
  26. Ionising_Radiation

    Ionising_Radiation Δv = ve*ln(m0/m1)

    Reputations:
    757
    Messages:
    3,242
    Likes Received:
    2,667
    Trophy Points:
    231
    Hm. I don't think that's likely. People have tasted what desktop 'parity' is like; they won't go back.
     
    hmscott likes this.
  27. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,653
    Trophy Points:
    931
    I'll fix it for yoo :) Max-Q is the replacement for M :D Max-Profit.
     
    hmscott likes this.
  28. Miguel Pereira

    Miguel Pereira Notebook Consultant

    Reputations:
    11
    Messages:
    197
    Likes Received:
    160
    Trophy Points:
    56
    Exactly, so now people are willing to pay more for "parity" or accept the lower-grade M (Max-Q?) part.

    I hope I'm wrong, but I'm a businessman, and that is what I would do in their place.

    Sent from my MHA-L29 via Tapatalk
     
  29. Ionising_Radiation

    Ionising_Radiation Δv = ve*ln(m0/m1)

    Reputations:
    757
    Messages:
    3,242
    Likes Received:
    2,667
    Trophy Points:
    231
    You have a point there, and that's not a good thing.

    This year is generally a lousy year to upgrade—expensive GPUs, expensive RAM, expensive SSDs, and not very good CPU improvements.
     
    bennyg and hmscott like this.
  30. Prototime

    Prototime Notebook Evangelist

    Reputations:
    201
    Messages:
    639
    Likes Received:
    883
    Trophy Points:
    106
    Agreed 100%. That's my biggest concern about Max-Q: it will either displace or raise prices on near-parity cards, especially in the mid-range. We'll see what the landscape looks like when Volta arrives, but I'm not holding my breath.
     
  31. Ravern87

    Ravern87 Notebook Consultant

    Reputations:
    22
    Messages:
    145
    Likes Received:
    99
    Trophy Points:
    41
    I wouldn't mind Max-Q if they went all the way up the GPU line with it. What I mean is, I would love to be able to say I have a laptop with a GTX 1080 Ti Max-Q or even a GTX 1080 Titan Max-Q. Maybe that's what will come next before Volta's release. Nvidia said they had some awesome tech coming down the pipeline before the end of the year, and Volta isn't coming until 2018, so maybe there are some new GPUs coming out that are still Pascal?? I think a more powerful Max-Q card that can hit 120 fps in AAA games while running cooler and more efficiently would be a dream. Imagine a GTX 1080 Ti Max-Q. That would be pretty cool, but not very likely.
     
  32. TBoneSan

    TBoneSan Laptop Fiend

    Reputations:
    4,460
    Messages:
    5,558
    Likes Received:
    5,798
    Trophy Points:
    681
    I'm pretty sure that's how Max-Q is marketed to kids. People want to "say" they've got a 108** in their laptop to impress their friends with how thin it is.
     
    hmscott, Aroc and Papusan like this.
  33. Ionising_Radiation

    Ionising_Radiation Δv = ve*ln(m0/m1)

    Reputations:
    757
    Messages:
    3,242
    Likes Received:
    2,667
    Trophy Points:
    231
    Fair point. No use in having 3584 shader cores if they all run at like 700 MHz...
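
    To put rough numbers on that (an illustrative back-of-the-envelope, assuming GP102's 3584 cores, the usual 2 FLOPs per core per clock for FMA, and the desktop 1080 Ti's ~1.58 GHz boost clock for comparison):

    \[ \mathrm{FP32\ throughput} = 2 \times N_{\mathrm{cores}} \times f: \quad 2 \times 3584 \times 0.7\ \mathrm{GHz} \approx 5.0\ \mathrm{TFLOPS} \quad \text{vs} \quad 2 \times 3584 \times 1.58\ \mathrm{GHz} \approx 11.3\ \mathrm{TFLOPS} \]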
     
  34. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,653
    Trophy Points:
    931
    Why not gimp the TDP a bit, but not as hard as Max-Qrippled, and call it what it is → 1080Ti? What a nice name... 1080Ti Max-Q LOL
     
    Vasudev and TBoneSan like this.
  35. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Just like SLI, I think the Max-Q "treatment" should be "allowed" in laptops only for the 1080ti, a GPU that otherwise wouldn't fit in a laptop at full power.

    The 1080 can already be made to run at full power in laptops, as can the 1070 / 1060, so those Max-Q versions are complete rip-offs.

    A 1080ti trimmed to fit the best-cooled, highest-power large-frame laptop in a vendor's line would be welcome if it gave 30% more performance than the next GPU down in their line.

    But, I wouldn't call it Max-Q, I'd call it a 1080ti. :)
     
    Vasudev and Prototime like this.
  36. Ravern87

    Ravern87 Notebook Consultant

    Reputations:
    22
    Messages:
    145
    Likes Received:
    99
    Trophy Points:
    41
    Lol, that's exactly what I was trying to get at... you just said it better. Hopefully it actually happens... maybe before Volta comes?
     
    hmscott likes this.
  37. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,653
    Trophy Points:
    931
    If Nvidia and the OEMs together killed the 1080Ti's TDP (gave it the same Max-Qrippled treatment) the way they did with the 1080 (up to 200w) → 1080 Max-Qrippled (110w), that would cut the allowed TDP nearly in half for a new Mobile 1080Ti. I would call a new Mobile 1080Ti like that a destroyed/very crippled chip. Worse than ever!! Then whether they call it 1080Ti or 1080Ti Max-Q wouldn't matter.
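
    For clarity, with the 200 W and 110 W figures above the arithmetic works out to:

    \[ \frac{200 - 110}{200} = 45\% \]

    So the Max-Q treatment cuts the allowed TDP by about 45% (equivalently, 200 W is about 82% above 110 W, which is where an "~81% lower" figure would mistakenly come from).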
     
  38. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    That's why I said I wouldn't make a 1080ti Max-Q Design laptop:

    "A 1080ti trimmed out to fit in the best cooling and power large frame laptop in a vendor line would be welcome if it gave 30% more performance than their next down in their line.

    But, I wouldn't call it Max-Q, I'd call it a 1080ti."

    A 1080ti trimmed out to fit 250w TDP might be just right :)
     
    Vasudev likes this.
  39. Miguel Pereira

    Miguel Pereira Notebook Consultant

    Reputations:
    11
    Messages:
    197
    Likes Received:
    160
    Trophy Points:
    56
    They could go the GTX 1070 route (more shaders than the original) and use the full GP102 with 3840 shaders active and lower clocks for better efficiency. They could make this happen with a TDP of 250W (?), or even 200W I guess. The top vendors could make it work in a proper chassis.
    They could call it whatever they want. Even Apache Helicopter, as long as it delivered the full potential.
     
    hmscott likes this.
  40. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Definitely Apache Helicopter!! With frick'n lasers!!
    apache-helicopter-laser.jpg
     
    Vasudev likes this.
  41. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    A >200W GP102 that stays within the 40 dBA Max-Q noise limit... what y'all be smokin'
     
    hmscott likes this.
  42. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yup, can we all just drop the Max-Q talk from this thread? WTH does it have to do with Volta anyway :)
     
    Vasudev and bennyg like this.
  43. ThePerfectStorm

    ThePerfectStorm Notebook Deity

    Reputations:
    683
    Messages:
    1,452
    Likes Received:
    1,118
    Trophy Points:
    181
    Though I bet the Volta XX80 that reaches laptops will at least be equal to GP102.

    Sent from my SM-G935F using Tapatalk
     
    Vasudev and hmscott like this.
  44. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,045
    Messages:
    11,278
    Likes Received:
    8,815
    Trophy Points:
    931
    I think Volta will have Min-V, aka minimum voltage, and a configurable number of shaders left to the OEM to set. A single card w/ multiple configurations that cost a fortune, so choose your Volta GPU carefully.
     
    hmscott likes this.
  45. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    We're probably 6 to 8 months from having any actual news on mobile Volta.

    This thread has many more, much darker turns to take.
     
    SuperContra, Vasudev and hmscott like this.
  46. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,653
    Trophy Points:
    931
  47. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    It was bound to happen, what with Volta being a year off... Nvidia needs something to throw out there.

    The RX Vega 56 runs like a stock 1080 when tuned, and the RX Vega 64 above that, so Nvidia is going to have a tough time slotting in a 1070 Ti. If the 1070 Ti is just under a 1080, then the AMD RX Vega GPUs are still faster, and if the 1070 Ti is a "cheap 1080", I would have to assume there will be a 1080+ too?

    IDK, it could be made up completely; it makes no sense to fracture the market with two 1070s below the 1080, and neither does reducing the price of the 1070 - why would Nvidia need to reduce the price when they are selling all they make?
     
    Ionising_Radiation and Vasudev like this.
  48. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    In the year of our Lord, 2017, we're still linking to WCCFTech as a source.
     
  49. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    I try to find another source to quote, but sometimes they are first, and in those cases it's possible they are correct - it's a judgment call :)

    This is definitely one of those things I would have passed on until a second confirmation, but it's already posted, so responding with a range of reactions seemed helpful.

    You can and did dismiss it out of hand due to the source; I choose to avoid that unless it's obviously faked. This one comes close :)
     
    Papusan likes this.
  50. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,653
    Trophy Points:
    931
    We're talking about Ngreedia... No Volta this year, and they don't want to lose more sales to the Red camp :rolleyes: Once AMD gets better prices on HBM2 they will push out more Vega.
     
    Vasudev and hmscott like this.