The Notebook Review forums were hosted by TechTarget, who shut down them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    Clevo Overclocker's Lounge

    Discussion in 'Sager/Clevo Reviews & Owners' Lounges' started by Spartan@HIDevolution, Mar 4, 2016.

  1. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    That I agree with. The other reason to do it is so that you have new releases top to bottom, from high to low end, all about the same time. Navi is not due until 2019 and is on 7nm, and they planned on a top to bottom release, if memory serves. But I would need to review a couple articles for that.

    Actually, the journalist asking the question was citing analysts mentioning 20% of the overall x86 market share, asking if Su believed the company and GloFo could deliver the chips necessary to reach that prediction. It is not Su herself saying they will reach it; rather, she believes the company can handle the supply chain to do so. Inherent in this is that demand will be high enough to capture that amount. The server and workstation segments are the higher-cost but higher-margin areas of the overall x86 market, and while AMD will make large inroads there, that alone is not enough to meet the analysts' prediction. You ALSO have to make huge inroads in the mainstream. Now, AMD has the low-end market tied up with the APUs and low-end mobile chips. As to making inroads into the mainstream, they have already proven that with Ryzen, mainly in the mid-tier. In gaming they made inroads, but didn't knock it out of the park compared to Intel. If you look beyond the K series, they have both captured the low-end HEDT segment and compete well against ALL NON-K chips. So you are ignoring all analysis besides the mainstream flagship in gaming. If you do productivity and light gaming, beyond just Office programs (which may switch once multi-thread optimization beyond 4 cores is done), then Ryzen wins. It is the mix of uses that determines which CPU is better. So please read more!

    Actually, the Ti was, by all estimates from analysts and journalists, supposed to start at $800-900, not $700. $700 was $50 more than the 980 Ti, but it completely broke the mold of the current pricing trend. Of course everyone expected the 1080 to get a price cut, just not to $500. So the extra headroom was seen as leverage to force AMD to cut potential margins and price their product between the Ti and the 1080, something seen regularly in the competition between the two companies over the past decade. If it had stopped there, I would agree. But they then broke precedent by releasing the Titan Xp so quickly. They usually give the Ti a quarter of dominance, at minimum, before the new Titan card, sometimes upwards of 6 months. Everyone commented on that being odd. But consider the C1 benchmark in SiSoft at 1200, beating the 1080 soundly while losing to the Ti, released on March 11, 6 days after the Ti release, followed by the C3 entry a couple days later with even better scores, the speed obscured and read as 300 MHz. That suggested that when fully clocked to 1500+, it could beat the Ti. Combine that with the quick release of the Titan Xp and you see Vega being a contender (taken with the analysis on straight-line scaling with speed). But moving the release of a card from Q1 2018 to Q3 2017 (not Q4, like you suggest) means rescheduling wafer production and allocation at TSMC to get those cards ready, considering we are at the start of Q2. That is no easy feat! It isn't just "moving it back to the original date it was rumored to come out on." There are way more moving parts involved, hence my prediction it will be 16nm instead of 12nm. They don't own their own fab. They cannot switch a release on a dime this close to it. It takes a lot more, and it also means tape-out had to have occurred sometime in Q1. Considering it is only 1 or 2 cards, that makes more sense than moving the entire lineup.

    Addressed in last answer.

    Edit: also, I've read the articles suggesting old tech, but Vega is using HBM and the HBCC, something Volta is to get eventually. By moving their release up, Nvidia is stuck using GDDR5X because they can't move to HBM yet! When AMD has adopted what gave Nvidia its advantage, and then added more on top, only people ignoring the numbers in favor of propaganda would buy the "old tech" line.

    You don't understand having money tied up in other market segments, do you? If you lose market share, you only want to cut margins to the point where the retained market share outweighs the lost margin. Any more than that and you bleed. The tech industry has been contracting for years except in a few segments, including mobile and server. As such, you only have so much room, regardless of revenue, before investors sell off because of too large a loss from actually having to compete. Never mind each company bleeding in other markets, like Intel in mobile and IoT, so much so that they are now selling fab time on their processes. Further, Nvidia is focusing its spare cash much more on AI and the automotive market. Their market float rose by 260%, while AMD's float rose over 300%, even though Nvidia lost market share in GPUs. Considering this, it is not so black and white. You cannot spend yourself into oblivion, and if AMD prices all of their products correctly, which they seem to have done with Ryzen and with the 16-core/32-thread HEDT chip expected to run around or a bit more than $1000, Intel has to cut deep into their margins considering their 10-core runs $1700.

    When you have a big ****, you don't have to pull it out and show everyone that you have it! They kept Ryzen quiet and it worked. They kept Vega quiet and scared the **** out of Nvidia. The people who choose blindly on brand would buy the Ti anyway, even if AMD won. But Nvidia knows their sales, which is why the change in schedule. It means a large portion are waiting for third-party cards with better cooling and want to see Vega's performance first. The silence doesn't mean what you think it means. If you look, AMD took a different route this past year, trying not to overstate performance and directly limiting leaks. That is more telling than your insinuation that they MUST leak it if it is any good. I already told you about the leaks on March 11th and 13th showing it will perform. So they fulfilled your stupid leak requirement; you're just too ignorant to look at and analyze what was shown, blinded by the Green! So GTFO!
     
    Last edited: Apr 18, 2017
    Jon Webb, Papusan and iunlock like this.
  2. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    They took the blame for the thread scheduler at Ryzen's release, though AMD later stated it was their own fault. Just giving an example, like you asked. ;-)
     
    TBoneSan, Papusan and Mr. Fox like this.
  3. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    I stand corrected. Thank you. Maybe there are even more examples that I don't know about.

    It still does not make up for their unethical approach on other things.
     
    Jon Webb, Papusan and ajc9988 like this.
  4. cookies981

    cookies981 Notebook Evangelist

    Reputations:
    55
    Messages:
    421
    Likes Received:
    180
    Trophy Points:
    56
    You don't seem to understand what cash on hand means. You can look up how much they have going back years. While they are spending some of that money on other things, their balance has consistently been higher than AMD's entire yearly revenue.

    Intel can afford to cut their prices and still make a profit; same with NVidia. AMD cannot. Nvidia cutting their prices on the 1080 and releasing the Ti puts the ball back in AMD's court. If they release a product that matches the 1080's performance at a similar price point, great. If they release one that doesn't, then AMD has to cut their prices even more, which hurts their bottom line far more than it hurts their competitors'.

    Lol, "they kept Ryzen quiet and it worked"? What world are you living in, dude?

    And no, if AMD had anything to show they would have shown it at their "big spicy reveal" that they hyped up like crazy. Instead they've been silent, which means they either don't have anything amazing to show, or they're incompetent. And no, those two pathetic leaks of compute performance don't mean jackshit, especially when it's a company supposedly giving up a ton of sales to a competitor just to keep up the surprise!!! Even more so when it's a company that has consistently proven unable to compete for the past few years... hype doesn't cut it.

    But keep pushing the "blinded by the green" ******** defense, because it's really the only argument you have. I'm no fan of NVidia or Intel; the faster they get their ass handed to them, the better it'll be for everyone. Like I already said, I'm going to bookmark this thread and laugh so hard when Vega doesn't turn out to be the amazing wonder GPU you keep thinking it's going to be.

    Once again answer the question. AMD themselves, nobody else, hyped up a big reveal back in Feb. What happened? Why the silence after NVidia announced the 1080 Ti the day before?
     
  5. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    Seems like what AMD has just released for new video cards is really pathetic. The RX 580 is a competitor to the GTX 1060. And it apparently sucks at overclocking. I hope that history does not repeat itself with Vega. Even if they release something that effectively competes with the GTX 1080 Ti, if it sucks at overclocking it's not going to be a viable option. A CPU or GPU that performs well stock but doesn't overclock well is an unacceptable product to me. A belly-button experience is what I would expect from a game console, not an enthusiast PC.

     
    Jon Webb likes this.
  6. Jon Webb

    Jon Webb Notebook Evangelist

    Reputations:
    148
    Messages:
    315
    Likes Received:
    455
    Trophy Points:
    76
    I ended up ordering both. Any clue how long those take to get delivered? Nice heatsink mod video. If I hadn't watched that I would have ended up taking the MAPP gas torch to it. Probably would have been ugly.
     
  7. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    First, Polaris and Vega are completely different beasts. It is like comparing Kepler and Maxwell. You just don't compare them!

    Now, on OC, I do agree, and that is why I'm waiting for performance reviews (as well as looking at driver support and software partners using the Duo card to design crossfire-based game support). If it doesn't overclock well, I will be looking specifically at whether it does so on air vs. water vs. extreme. Remember, with Nvidia's Pascal (a Maxwell die shrink with Boost BS), unless you keep it under 50C you cannot reach the top clocks. So I readily put those caveats on whether Vega takes a win.

    Edit: I forgot to mention, newer, more refined processes on the same node not only give more performance, even if minor, but also cut production costs! This leaves a larger margin to be had.

    Cash on hand affects distributions to shareholders. If cash on hand declines to a point where other companies in the NASDAQ offer more return, then institutional investors switch positions, thereby changing the market float. IT DOES NOT JUST MATTER WHETHER THE MONEY CAN BE SPENT; IT STILL AFFECTS OTHER CONSIDERATIONS ON BUY/SELL DECISIONS! You have to look at how much you can do without affecting that, as well as a plethora of other considerations. Why else do you push up a release that far? So that you can sell at a higher margin, thereby offsetting what is lost on the Ti and the Titan Xp, AND trying to shore up cash on hand so that it does not affect distributions so much that it sways investors' decisions while you are in a pricing war. Your two-dimensional analysis is tedious.

    And as for Ryzen performance, that was held VERY tight to the chest, with a leak around August, then a leak in December. Other than that, you got practically nothing on leaks. Compared to AMD's history, that is very good.

    As to the Capsaicin and Cream event, Nvidia decided to hold their own event the same day and announced it last minute. This affected the event. But at that event, AMD demonstrated how the HBCC improved both average and minimum frame rates, by 50% and 100%, respectively, if memory serves. They also focused on VR performance, which should be considered in light of the wireless VR company they just acquired. Further, they detailed tiling and shader pathways, including a primitive shader sitting under the normal shaders (Nvidia switched purely to a primitive type, which is why Maxwell performance in certain instances went down and why AMD cards ruled at Bitcoin mining). At Nvidia's event that same night, the stream was pure ****, pausing and breaking like hell, but they detailed how much of their performance boost came from the exact items that AMD said they had just incorporated for Vega. Trust me, I watched both events.

    This was also where AMD showed crossfire scaling reaching 100%, something considerable, as Nvidia only gets a bit above 70%. But note, as D2 and others have pointed out, with Nvidia you can force SLI, something that hasn't been possible with crossfire for 5 years or so. That is a plus in Nvidia's column and should not be ignored.

    Considering AMD presented in the morning and Nvidia at night, holding back some of the stuff makes sense. You could see guests from other companies tripping up on what they were allowed to say. That wasn't because they were unprepared; it was most likely due to last-minute restrictions, especially in the parts related to the server GPU market. You don't go full bore when Nvidia could hear it and tweak the upcoming release in their firmware. Playing it strategically and limiting the reveal when Nvidia played that game at the last minute makes strategic sense.
     
    Last edited: Apr 18, 2017
  8. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    Yes, I agree. Everything will hinge upon whether or not it is a solid overclocking beast. I do not want to see them in a repeat of their checkered past. AMD has had a few GPUs that hold their own fairly well against NVIDIA running stock, then everything changes and they get slaughtered by NVIDIA once the overclocking begins. That's totally not OK. The best approach is to always wait and see before spending money, and their track record is not a good one. Rolling the dice and hoping for the best would be foolish and ill-advised with an AMD product. If it does overclock well, hopefully durability would not be an issue. It has been in the past. That might be impossible to determine... only time would tell on durability.

    This kind of thing is pretty disheartening. Seems Ryzen doesn't work very well with GeForce GPUs, so their future success depends on delivering something really special in terms of GPU performance. To what extent NVIDIA shenanigans play into this is anyone's guess. I suspect they do, because NVIDIA is run by crooks (and they have some willing accomplices), but at the end of the day the higher score wins, regardless of the conspiracy theories.

    Ryzen 1800X Powered 1080 SLI Desktop #1 Scores vs P870DM3

    http://www.3dmark.com/compare/3dm11/12107157/3dm11/12126406#

    http://www.3dmark.com/compare/fs/12214051/fs/11418188#
     
    Last edited: Apr 18, 2017
    Papusan and ajc9988 like this.
  9. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I do agree. Back when the 6000 and 7000 series were going up against Nvidia's abortion of a 400 series, the tables were turned. But this type of back and forth was common in the era before that. Nvidia has been dominant for 5 years though (unlike Intel, which has had 10).

    Now, once again, you point to benches favoring IPC over threads. This is where Ryzen really makes a difference in certain uses and workloads. The better the multi-threading support, the more Ryzen picks up. If you want to do something running in the background while gaming, including streaming, Ryzen wins.

    Now, the question of Nvidia vs. Radeon cards with Ryzen gave mixed results, with the Radeon cards often performing better in software with better multi-threading performance. A couple of articles and videos covered this. But that means all games that did not come out recently will likely not be optimized, meaning in GAMING, Intel will stay on top with the top end of their chips. That is use-specific, though.

    Also, remember, this is AMD's first shot at 14nm (think Broadwell) and first shot at an SMT design, both in the same chip. Considering this socket will be used through 2020, that means refinements will greatly benefit AMD per revision (with a possible 7nm chip coming in 2019-20, when Intel finally moves past their 14nm+++ (third process improvement, 4th CPU generation on this node) with their 10nm++ Tiger Lake chips). So, next year, with better yields and a refined process, you could see a drastic improvement in IPC and in overclocking, with the architecture better utilized and designed for in software.

    But scoring lower does not mean failure in all cases. Also, considering the number of server partners picking up AMD GPUs for their work, there is a good chance it has the performance (early partners already have Vega and want to sing it from the rooftops, but are under NDA). That is server, though, not mainstream, but worth a thought.
     
  10. cookies981

    cookies981 Notebook Evangelist

    Reputations:
    55
    Messages:
    421
    Likes Received:
    180
    Trophy Points:
    56
    You're confusing dividends, cash available for distribution, and cash on hand. Nobody here but you is talking about market orders but hey. Keep trying though, you'll get it eventually.

    Here's a simple example. Nvidia is most likely still going to make a substantial profit with the 1080 priced at $500. Would it be as much as when it was $700-800? No, obviously not, but now the ball is in AMD's court. If they had kept the price at $800 and AMD released a GPU that's 90% as powerful for $500, it becomes a no-brainer to go with AMD and Nvidia loses out. Now that the price has dropped to $500, AMD has two options:
    1. Match the performance of the 1080
    2. Drop the prices on their cards to compete

    AMD cannot afford to do 2 because they simply do not have the capital. Nvidia does, and they can.

    Same thing with Intel. It doesn't cost them $1700 to make that processor; they've just had no competition for so long that they've been able to get away with minor improvements and ridiculous price tags. Dropping the price tag is less of a negative for Intel/Nvidia than it is for AMD. It doesn't mean they're scared... it's a perfectly smart business move designed to hurt AMD by forcing them to compete in the one arena AMD really cannot afford to lose: price.
    Still haven't answered my question though.

    There is no way in hell, outside of pure incompetence, a company would stand by and let their competitor get away with a solid 2-3 months of sales, especially when their bottom line depends on it. The people who buy 1080 cards (normal, Ti, or Titan) are likely not going to upgrade to Vega unless it turns out to be something like 2x as powerful. Which means AMD's ONLY hope is getting people to wait and hold off on upgrading, which means they have to offer something other than "guys trust me ok".

    You said it yourself: at the original clocks (and yes, optimization will play into this) the leaked Vega benchmarks were pretty bad; they had to boost the clocks to be able to compete. Nothing wrong with that, except that overclocking a Pascal card is pretty damn easy for anyone to do. If AMD really had to increase their clock speeds to compete, then what does that mean for their overclocking potential? Would an average OCed 10-series be similar to an average OCed Vega?

    The other thing you're not considering is that Vega will require a bit of time before it gets optimized and we start seeing its true potential. AMD have been kind of irrelevant for years now in the GPU area, and Vega is a pretty big departure from their previous cards, so assuming Vega launches next month, you won't start seeing its true potential in games until June or July... which puts it right up close to Volta's release schedule. And here's the thing... we know more about Volta than we know about Vega. So the question then becomes: if you skipped over the Ti and were waiting for Vega, why not wait a month or two extra for Volta's reveal? This is why AMD's silence on Vega is even more worrying... they need to start giving out information now if they want people to hold off on an upgrade, not keep promising hype and reveals at some point in the future.

    AMD was the one who promised a "reveal" of sorts at GDC; AMD was the one who promised it was going to be spicy. Instead they offered absolutely nothing new. So either their staff is totally incompetent (which is possible) or they really had nothing to show.
     
    Last edited: Apr 18, 2017
  11. cookies981

    cookies981 Notebook Evangelist

    Reputations:
    55
    Messages:
    421
    Likes Received:
    180
    Trophy Points:
    56
    One thing I'm really looking forward to with Vega is the mobile GPUs. They may not be the most powerful (that remains to be seen), but their prices may finally force Nvidia to reduce the ludicrous cost of their "mobile" chips.

    There's really no reason why a 1080 (laptop) costs more than twice as much as the desktop 1080. You could sort of explain it in the past, with the M-series chips being different from the desktop ones, but now that they use the exact same chip that argument holds no meaning. Yes, I'm sure it probably costs more to get the mobile card to market, but there's no way it costs that much more.

    Maybe they can introduce their own MXM equivalent that could potentially force NVidia to start caring about theirs again...one can wish.
     
    hmscott likes this.
  12. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Creator's update, everyone:
    [IMG]

    (No, that is not my PC, clearly)
     
    Ashtrix, Papusan, ole!!! and 2 others like this.
  13. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    So, let me educate you: a dividend is a distribution, by law. It is a specific type of distribution. How much can be distributed is relative to cash on hand, and a distribution is allowed so long as it does not affect the corporation's ability to pay liabilities as they arise, which would otherwise cause a bankruptcy and a clawback on the distribution to shareholders. So they directly affect each other. You are correct that distributions and cash on hand are not the same, but you are full of **** that the first two are not the same, as one is a specific form of the other. You're talking to a person who knows securities and corporate tax. But keep trying, you'll get it eventually (**** head).

    Next, let me address that the SiSoft leak already showed that, when clocked to 75% of what the final clocks will be, on early silicon and without specific driver support, it could beat the 1080 in most aspects of the test, beating the 1080 overall but losing by 25-30% to the 1080 Ti. That means its performance is already above the 1080's. But your concept is close. They tried to box in the pricing, because even if the card only ties the Ti, you have to price it low enough to attract people back to your brand. Hence the price cuts!

    Further, the Ti has a larger margin built in, which allows for more profit, hence the quick release. Intel has about a 60% margin built in. Quick math puts the cost at about $680 for that 10-core. AMD will have a 12-core at about $750-900 and a 16-core at $1000-1200. You cannot price the 10-core at cost. So Intel is in a tight spot, banking on IPC benefits over core count. Considering Intel is releasing a 6-core mainstream chip and AMD released an 8-core mainstream chip, in a year or two more programs won't cap the multi-threading benefit at 6C/12T, which is currently considered the line above which you have a workstation rather than a gaming machine. So the math says Intel is going to have pricing issues without significantly hurting themselves, which is why they started selling fab time, to make up ground in another area where they are bleeding money, and why they inked the deals they did after Ryzen was released.
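    To spell out that quick margin math (a rough sketch; the `estimated_cost` helper is made up for illustration, and the figures are the post's own estimates, nothing official):

```python
# Gross margin is (price - cost) / price,
# so an implied production cost is price * (1 - margin).
def estimated_cost(price: float, margin: float) -> float:
    """Rough cost estimate from list price and gross margin fraction."""
    return price * (1 - margin)

# The post's own figures: a $1700 10-core at roughly 60% margin
print(round(estimated_cost(1700, 0.60)))  # -> 680
```

    The same helper also shows why a deep price cut bites: dropping the 10-core to $1000 at the same ~$680 cost would shrink the margin from 60% to about 32%.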

    But moving up the release by that far, affecting TSMC's ability to crank it out and potentially the process used, plus the extra costs of the move, just to get a better-margin product out and be on top, says they have doubts about the Ti and Titan Xp against Vega. So try again.

    I answered directly. Nvidia announced their show, scheduled against AMD's, at the last minute. AMD withheld certain info because of that. They leaked performance a week after the Ti release. How much more direct do you need?

    Wait for the analyses on sales to come out, compared with average annual year-over-year data, before you can say that, considering I already pointed to the leaked benchmarks, which you continue to ignore after I posted them.

    I DID NOT SAY THE VEGA BENCHMARKS WERE PRETTY BAD! I said they beat the 1080 and the card was underclocked from its base clock. But **** if you pay attention to anything other than what you want to hear. They did what most card companies do and clocked them under what the final release will be so they can test performance without a full release. God, you are dumb! They were always scheduled for over a 1500 base core clock, plus a boost clock. So you misread and need to go ****ing reread, you inept idjut! Further, if you read my comment, all Pascal cards are limited unless you get them under 50C. So just **** off, please!

    Now, I'll agree on optimizations, which historically take 3-6 months to reach their best. Actually, Q3 would be July at the earliest, and for most companies starts in July/August, depending on fiscal year. Not June. Now, at 3 months and expecting release mid- to late May, that places mid- to late August before it really comes into its own. So later than your timeline, but you are correct that it lands at about the earliest Volta shows up. Also, that is false. ZERO BENCHES EXIST FOR VOLTA!!! WE DO NOT HAVE DETAILS ON THE HARDWARE. Do you enjoy making **** up, or do you just bury your head in the sand? The information is there if you look, which you evidently have not done!

    AMD did reveal a lot about the card. If you were not interested in information about server products or VR use, then it was a letdown. They detailed their version of PhysX, the effect of the HBCC, etc. They showed some use cases, but capped performance at 60 FPS, which prevented a good understanding of the card's ability. Nvidia's trickery of adjusting clocks in firmware at the last minute literally muted them, which caused exactly the ignorance you are currently spewing. But, what the **** ever.
     
  14. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    What is this? LOL. :vbbiggrin: Is that a real problem someone is having or just a silly animation? (It would not surprise me if someone was experiencing some nonsense like that.)
     
    Donald@Paladin44 likes this.
  15. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Real problem as far as I know

    Sent from my OnePlus 1 using a coconut
     
  16. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,213
    Messages:
    39,333
    Likes Received:
    70,628
    Trophy Points:
    931
    Stardock Fences software with the gate accidentally left open. :D
     
  17. cookies981

    cookies981 Notebook Evangelist

    Reputations:
    55
    Messages:
    421
    Likes Received:
    180
    Trophy Points:
    56
    You're still missing my point, and you're still talking about shareholders and distributions when I'm not (and haven't been). Funny for someone who claims to know what they're talking about but resorts to name-calling.

    So yup, keep trying, you'll get it eventually.
    The leaked benchmarks do not prove anything except compute performance. Compute performance doesn't always match up with in-game performance.

    You didn't have to say the Vega benchmarks were pretty bad; it's pretty ****ing obvious what you meant. The original leaked benchmarks put them right around the 1080 at whatever clocks they chose to leak, and slower than the 1080 Ti.

    It wasn't until the clocks were increased that it showed potential, which just proves my point. And yet again, COMPUTE performance. The leaked gaming performance of Vega (using the two games that were shown) put it right about on par with the regular ole 1080. Which would make it significantly slower than the 1080 Ti. But unfinished hardware, an unoptimized game, and ****ty drivers all would have played into that, and the final result would likely be better. How much better is an unknown.

    So yeah the leaked benchmarks so far have been nothing noteworthy and nothing to shout about.

    And once again, the clock speed for Vega has not been officially announced. The only thing AMD have said is that it SHOULD hit 25 TFLOPS, which would give it a base clock of around 1500 MHz based on the hardware. As far as I'm aware, there have been no leaked benchmarks at that clock rate. In theory, if AMD wasn't exaggerating, then it should be around that, but we'll see.
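    That 25 TFLOPS → ~1500 MHz back-of-envelope works out as below (a rough sketch; the 4096 stream processors and the packed-FP16 rate were rumors at the time, not confirmed specs, and `implied_clock_mhz` is just an illustrative helper):

```python
# Peak FLOPS = shaders * FLOPs-per-clock * clock,
# so the clock needed for a target TFLOPS figure is:
def implied_clock_mhz(tflops: float, shaders: int, flops_per_clock: int) -> float:
    """Clock (MHz) required to reach a target TFLOPS number."""
    return tflops * 1e12 / (shaders * flops_per_clock) / 1e6

# 25 TFLOPS FP16 with 4096 shaders at 4 FP16 FLOPs/clock
# (FMA counts as 2 ops, packed math doubles it)
print(round(implied_clock_mhz(25, 4096, 4)))  # -> 1526
```

    So the rumored shader count plus the 25 TFLOPS claim lands at roughly 1.5 GHz, which is where the "around 1500" figure comes from.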
    I didn't say benches, I said information. Nvidia have been talking about Volta and its expected performance increase for well over a year now; take that with a grain of salt, subtract the usual NVidia inflated gains, and you can figure out a rough performance increase over Maxwell and Pascal. It could very well be more or less than that, but it's a start. It's also NVidia who have delivered cards that work; AMD has delivered nothing but garbage for the past few years and has been pretty silent on Vega.

    So far the gaming performance for Vega has been around the 1080. Known information; I'm fully aware there are many things that will affect that. So yeah, it's a pretty safe bet that if you're waiting for Volta you're not going to be disappointed if you're coming from a 980, or even from a 1080, unless Nvidia goes ahead and ****s up like they did with the 880M vs. 780M.

    They didn't reveal anything that people expected them to reveal. Literally every single person agrees that AMD's reveal was a gigantic letdown compared to what they were hyping and what NVidia was obviously going to reveal. But hey, apparently everyone misread it, except for you.

    For someone who wants to call others fanboys, you sure are taking this super personally. I wonder why... Don't worry, I promise not to laugh too hard when you are disappointed (again). But keep bringing up server performance, which nobody here (except for you) is talking about.
     
    Last edited: Apr 18, 2017
  18. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    @cookies981 - Considering the gaming performance matched the 1080, it beats the 1080 in compute, and all of that was at 75% or so of the final clock speed with no driver support when the leak was taken... yeah, huge flop there!

    As to the show, I already explained that, which is also why they had people stumbling over which features they were allowed to discuss and over certain aspects of performance.

    Either way, I believe we have reached the end of any information being added. You evidently put more stock in a graph you yourself admit is faulty on Nvidia's historic performance statements than in discussions of architecture, implementation of features, new hardware, etc. So, at that point, we hit the end of logic.

    Also, considering you cannot see how business decisions are affected by all of this, which is the basis of the discussion on why you move the release in this manner, we are done here!
     
  19. cookies981

    cookies981 Notebook Evangelist

    Reputations:
    55
    Messages:
    421
    Likes Received:
    180
    Trophy Points:
    56
    Once again, final EXPECTED speed. AMD have not stated what the base clocks or the boost clocks are going to be. They have just given an expected TFLOPS value, which would give roughly 1500 MHz clocks based on the hardware. But is that base? Is that boost? They haven't said, and there have been no leaked benchmarks for it.
    Funny, because if you were paying attention instead of talking about shares outstanding, dividends, floats, etc., you'd have realized that is what I've been getting at.

    Nvidia and Intel making their cards/chips cheaper is meant to hurt AMD more. If they stuck with their original pricing, AMD wins at what they've largely always won at: price-to-performance ratio. By dropping prices, they make it significantly harder for AMD to compete, especially when AMD does not have the capital of its competitors. If Vega ends up identical to the 1070/1080 or the 1080 Ti, AMD now has to price its cards lower to get people to switch over. If Vega ends up worse, AMD once again has to price its cards lower to get people to switch over. The only way AMD wins is if they have a card that's better AND cheaper. In every other consumer/mainstream case, Nvidia/Intel comes out ahead, though in a few cases it may take a couple of months for the results to show.
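    The price-cut argument above is just a ratio comparison. A tiny sketch with entirely made-up frame rates and prices (not real benchmarks) shows how a cut flips the value winner:

    ```python
    # Illustrative only: how a price cut shifts a price-to-performance comparison.
    # All numbers are invented for the example, not real benchmarks or prices.

    def perf_per_dollar(fps, price):
        """Average frames per second delivered per dollar spent."""
        return fps / price

    nvidia_fps, amd_fps = 100.0, 95.0

    # At launch pricing, the slightly slower card wins on value:
    assert perf_per_dollar(amd_fps, 450) > perf_per_dollar(nvidia_fps, 550)

    # After a price cut, the faster card takes the value lead as well,
    # leaving the slower card no angle except cutting its own price:
    assert perf_per_dollar(nvidia_fps, 470) > perf_per_dollar(amd_fps, 450)
    ```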
     
  20. Hd172

    Hd172 Notebook Consultant

    Reputations:
    56
    Messages:
    166
    Likes Received:
    169
    Trophy Points:
    56
    So I just downloaded Cinebench R15. What kind of temps should I expect at 5 GHz? I use my computer in my car mostly, with the AC on full blast. I have a laptop cooler that just lifts the computer off my lap and elevates the rear with light airflow.
     
  21. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    this is where things get deep & interesting. since ryzen came out i've been reading a bunch of crap, but to summarize: nvidia's GPUs, aside from the gtx 480 (which has its own hardware scheduler, which is why it runs hot), take advantage of intel's cpu via the driver to deliver performance. the main reason is likely that most games (not including the latest triple-A ones) are still on DX11, while AMD's hardware has its own hardware scheduler.

    normally, hardware that's powerful & has its own hardware scheduler should technically be better than nvidia's GPU design. the issue is, software (ie games etc) is not taking advantage of the design of AMD's CPU or GPU (the gpu hardware scheduler), and here's where it becomes AMD's biggest concern: they don't have the cash to pay developers or game designers to code it the way they want, which is a big issue they are trying to resolve.

    the benchmark is one thing, but the details are another. for example, many videos show the intel 7700k and 6700k running at 99 to 100% cpu usage all the time, while AMD's cpu runs at only 30-40%. clearly optimized for intel's cpu.

    these types of things take time, and i hope AMD has enough cash to get developers to finally take advantage of their CPU. tbh i can't stand intel's milking scheme and years and years of quad core with little to no IPC improvement, it's simply bullcrap, though i do give them credit for having the best silicon quality, and that's also because they were in the lead and have a ton of money over AMD.

    ryzen also has its own share of issues, like the core complex etc. assuming ryzen2 in 2019 fixes all these problems, along with software being optimized for the scalability of the multiple-core-complex design, it will top intel in many ways.
     
    Last edited: Apr 19, 2017
    ajc9988 likes this.
  22. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Ryzen 2 is 2018 and likely a second 14nm design. GloFo transitions to 7nm in 2H 2018, with volume production in 2019. If you look at the Wafer Supply Agreement modified at the end of 2016, it modified terms up to 2020, including wafers for 7nm. This shrink is a full node, and this tech is not licensed from Samsung like 14nm was; it utilizes IBM's process, and IBM was the first to build a 7nm chip, in 2015. So Zen 3 in 2019 is likely to be a 7nm design going against Intel's 10nm++ process (Intel's first process on 10nm to beat Coffee Lake's 14nm++ process). Intel only compares its 10nm process to other 10nm processes, but TSMC is starting 7nm this summer and GloFo next year. Both have it set up to potentially integrate EUV lithography once available. TSMC is talking about 5nm by 2019 or 2020 (I forget which), with GloFo and Samsung following shortly. So, when Intel goes to 7nm, the rest will have been there for years and be almost ready for the switch to 5nm. Even if Intel has an advantage in logic density per node, it may be erased by being on a larger node moving forward.

    Same goes for Nvidia. It plans a 12nm half node for Volta (which isn't likely for the pushed-up cards), while AMD uses 14nm for Vega, then 7nm for Navi. So it will be interesting, but don't expect your cards to be good past 2020, as both GPU companies will most likely be on 7nm by then, with an eye to 5nm.

    Sent from my SM-G900P using Tapatalk
     
    ole!!! likes this.
  23. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    you know what i find interesting is that, because intel was so much ahead, glofo and TSMC had to skip nodes in order to catch up, otherwise they wouldn't be able to compete. and let's face it, intel is clearly leading this industry even with the others' skip. we can clearly see the quality of silicon being produced here: the glofo process used in ryzen clocks to a max of 4.1-4.2ghz on most chips (regardless of 8c/6c/4c), while intel's kabylake silicon can do 4.9-5ghz on over half the chips they sell. though it did take them 2 extra years to refine 14nm, so i don't really know the truth.

    in business, glofo and TSMC had to make it look nice in public to attract attention for their stocks, so they can finally say "we are ahead of intel as they are on 10nm and we're on 7nm", but people who really know quality will prob still stick to intel.

    another thing that plays a big part in semiconductors nowadays is scalability, and AMD is the one that's ahead currently with their latest core complex design and the upcoming vega GPU. my only concern for vega is whether there is a latency issue with the same design as in ryzen, since ryzen's performance is too dependent on memory bandwidth. nvidia and intel are facing this same issue, but amd took the lead. it's good to finally see some changes.
     
  24. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    False. 10nm gave everyone trouble. This is why they decided to skip it. GloFo directly stated they were skipping, which made sense due to 10nm yield issues and IBM paying them to take its fabs, its IP, and its engineers, who had already built on 7nm. This made Intel the 4th largest fab, behind Samsung, TSMC, and GloFo. Because of this (edit: and GloFo announcing they were abandoning 10nm for 7nm), TSMC pushed to get to 7nm first. Stop giving credit to Intel when they are not the driver in all cases!

    Further, TSMC's first 7nm is around the density of Cannonlake on logic, and its 7nm+ will beat Intel's density on logic. If GloFo is anywhere close to that, Intel is ****ed! And considering IBM's POWER chips are extremely competitive with Intel's offerings at a given node, Intel should be worried.

    Also, for Vega, they divorced the graphics processor design from the CPU design. This means it is the first GPU in over 5 years not built around APU designs. So don't worry, completely different design teams!

    Just stop saying Intel is ahead. They are heading for a fight! You mistake the reasons for 7nm entirely!



    Sent from my SM-G900P using Tapatalk
     
    Last edited: Apr 19, 2017
    Ionising_Radiation likes this.
  25. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    honestly though, i don't buy a lot of their crap. if 10nm gives all this trouble yet intel went for it anyway, that doesn't make a whole lot of sense. it makes more sense, business-wise, to try to catch up and present it to the public to show how advanced and ahead a company is. when they mention trouble, notice how few details they mention.

    to give a full comparison: glofo is cheaper for AMD, but it's their first 14nm process. whereas intel has had 14nm since broadwell, so it's literally 2.5 yrs into 14nm, and we're going on 3.5 when coffeelake comes out. that is gonna be optimized as fk.

    also, intel may be fked from a business point of view, but that's none of my concern. if their 10nm icelake cpu can run much cooler and more power efficiently than 14nm kaby while clocking just as well, i couldn't care less, and i'll grab a CPU with at least 6 cores right away.
     
  26. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Actually, Coffee Lake uses 14nm++, which will be better than any 10nm process until 10nm++. That means Kaby is **** compared to the chip they release at the end of the year (with rumors Intel may move up the Coffee Lake release even further). Intel is scared AF! So don't just buy on name and OC, but on need and purpose. If you bought Kaby, you got fleeced. Truth!

    Sent from my SM-G900P using Tapatalk
     
  27. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Papusan likes this.
  28. Johnksss

    Johnksss .

    Reputations:
    11,531
    Messages:
    19,457
    Likes Received:
    12,828
    Trophy Points:
    931
    Well, now that we are totally off topic! I sure do apologize for going in the wrong direction!!!!!!!!!!!!!!!!!!!!!!

    Can we get back to overclocking now?
     
  29. bloodhawk

    bloodhawk Derailer of threads.

    Reputations:
    2,967
    Messages:
    5,851
    Likes Received:
    8,565
    Trophy Points:
    681
    [​IMG]
     
    D2 Ultima, Papusan, ssj92 and 2 others like this.
  30. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    i think it's them trying to make extra money from 14nm, either from overproduction or too much inventory on hand they need to get rid of, not sure. 10nm might be junk just like 14nm broadwell, crappy silicon, so 10nm++ might be the one to go for. again, it's not out, so we don't know yet.
     
  31. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    No, Coffee is a new process and gives the largest jump in performance in recent memory. 10nm++ will not be here for years. So, because it is a new process, this isn't just about getting rid of old chips. But it does show Intel is willing to screw over Kaby owners because they want more money instead of cutting prices (back to my discussion yesterday).

    Sent from my SM-G900P using Tapatalk
     
    Papusan likes this.
  32. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    coffeelake won't be a new process, it's another optimization of 14nm for desktop and high-end mobile. sucks we're gonna be stuck on 14nm for 3.5 to 4 yrs straight.
     
  33. cookies981

    cookies981 Notebook Evangelist

    Reputations:
    55
    Messages:
    421
    Likes Received:
    180
    Trophy Points:
    56
    Um, I'm not sure what that article is talking about at all, because...
    https://twitter.com/intelnews/status/829773829471744000

    That was back in early Feb at Intel's investor meeting, so why is that Hexus article acting like this was brand new information that came out yesterday lol? We've known it for 2+ months now.

    This also lines up with Intel's previous release schedule.

    Broadwell: September 2014
    Skylake: August 2015
    Kaby Lake: October 2016
    Coffee Lake: 2H 2017 ~ August 2017
     
    Last edited: Apr 19, 2017
    ajc9988 likes this.
  34. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Coffee Lake was once expected Q1 2018 for the "S" series, but that was moved to an expected Q4 2017 because of the 2H timing you mentioned, considering the Q1 2017 release of Kaby, as anything sooner cannibalizes Kaby sales. This rumor says the "S" series desktop chips, which are the K overclocking chips, arrive in August, with the lower chips filling in later. So I agree with you, in part.

    It is the moving up of the S series before the H series that is of note, turning the release schedule you just cited on its head.

    Sent from my SM-G900P using Tapatalk
     
  35. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Look it up, Intel has 3 processes on the 14nm node. This was disclosed at an event sometime in the past month.

    Sent from my SM-G900P using Tapatalk
     
  36. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,188
    Likes Received:
    17,895
    Trophy Points:
    931
    I've started my dremel work on my stand; got one hole done. I'll have to wait a little on the other; it's pretty thick steel and it took a while.

    I have also pre-ordered one of those bitspower caps out of interest.
     
  37. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    yep, and it's going to be 4. coffee is the 4th, which sucks.
     
    Papusan likes this.
  38. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Actually, that is generations, not processes. The generations are Broadwell, Skylake, Kaby, and now Coffee. As for processes: Broadwell and Skylake used the original 14nm process, while Kaby, Kaby-X, and Skylake-X all used the second process (14nm+). Coffee will be the 4th generation, but will use the 14nm++ process. To show performance:
    [​IMG]

    Notice the jump between + and ++ on 14nm, including it beating the first 10nm node, which is Cannonlake. Now, if they made Skylake-X on 14nm++, no one would buy Cannonlake, as it is a step down in transistor performance (although it will have increased density). So, it may be slightly better than the move from Skylake to Kaby. But in this report, Intel only compared itself to 10nm, when the majority of fabs are jumping straight to 7nm, negating the argument that their 10nm process is the best (the change coming for reasons of competition at fabs larger than Intel). Here is an article about this event in March:
    https://www.extremetech.com/computi...0nm-process-wants-change-define-process-nodes

    Now, the rumor should be contrasted with the prevailing theory to date and the rumor published two days ago:
    https://www.fool.com/investing/2017/04/17/report-intel-corporation-preps-six-core-coffee-lak.aspx
     
    Ashtrix and Papusan like this.
  39. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,188
    Likes Received:
    17,895
    Trophy Points:
    931
    It's certainly all going to slow down unless something significant changes. Fighting against physical limits is always hard.
     
    ajc9988 likes this.
  40. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    it's all in how people see them. the way i see it, though, is that back in the day we had first gen i7 and sandy, both of which lasted about a year on 32nm; ivy, the first 22nm, lasted 1yr. the old tick-tock became tick-tock-refresh, which intel changed to tick-tock-optimization. however, they went further and outright ignored the rule they set, and it became tick-tock-optimization-2nd optimization.

    i wonder if, when 10nm hits, that might change to tick-tock-optimization x3 lmao


    woah http://wccftech.com/intel-unveil-basin-falls-coffee-lake-skylake-x-asml-ryzen-16-core/
    if the rumors are true then we will see coffeelake with z370 this yr in august? lmao, 6c variants
     
    Last edited: Apr 19, 2017
    ajc9988 likes this.
  41. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,839
    Likes Received:
    59,614
    Trophy Points:
    931
    I put it here: Coffee Lake: cheaper 6-corer possibly already in August - Notebookcheck.com

    Translated from German :eek:
    [​IMG]

    Intel to Unveil Basin Falls, launch Coffee Lake ahead of schedule - digitimes.com

    "Intel will unveil its Basin Falls platform, i.e. Skylake-X, Kaby Lake-X processors and X299 chipset, at Computex 2017 in Taipei during May 30-June 3 two months earlier than originally scheduled, and will bring forward the launch of Coffee Lake microarchitecture based on a 14nm process node from January 2018 originally to August 2017, to cope with increasing competition from AMD's Ryzen 7 and Ryzen 5 processors, according to Taiwan-based PC vendors."
     
    Jon Webb and ajc9988 like this.
  42. Johnksss

    Johnksss .

    Reputations:
    11,531
    Messages:
    19,457
    Likes Received:
    12,828
    Trophy Points:
    931
    So for once Intel decides to release early instead of always releasing late? What's the big deal with that?
     
  43. leftsenseless

    leftsenseless Notebook Evangelist

    Reputations:
    426
    Messages:
    392
    Likes Received:
    664
    Trophy Points:
    106
    Got to keep AMD and Ryzen at bay. Intel has proven that they only want to move when they have to. Without proper competition, they've allowed their procs to stagnate. Now they want to keep AMD from gaining any share, so they are ready to move.
     
  44. Johnksss

    Johnksss .

    Reputations:
    11,531
    Messages:
    19,457
    Likes Received:
    12,828
    Trophy Points:
    931
    No, they don't. That is just public opinion. Intel will not go broke if AMD gains back some of what it lost over the last 10 years.
     
    Jon Webb and Mr. Fox like this.
  45. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I do agree. It's up to the materials scientists to deliver in time to get to 5nm and below. TSMC feels relatively confident about 5nm in or around 2020. GloFo will follow, same with Samsung. Intel is now taking the older approach: develop the node with new and better processes before shrinking. But 5-3nm seems to be where the wall is (although it should be noted the first three-molecule transistor has been made). As you said, though, we are reaching the limits of this tech (but there is more waiting in the wings, like nano-vacuum tubes)...

    Well, the size of the transistor defines the node. It is modified by the type (FinFET, SOI, etc.). Then you have the process the company uses, which is split into versions or whatever the company decides to call the differences. CPU generation refers to changes in the architectural design of the chip: how they stack transistors and locate components relative to each other on the silicon. Processes were talked about a lot more before the race to miniaturization; then the node became the talking point. Etc.

    It is the amount of time involved and the reasons for the change that matter. How it matters to us is that Intel will use the same socket, but there is no word on backward compatibility with the older chipsets. Also, for those who cannot keep updating their hardware so often, it can tell them which hardware to get and then sit back with some popcorn while seeing what happens. Now, with a 6-core mainstream chip and a nice jump available (wait for release and benches, obviously), Coffee may be a good place to sit and wait until 2019 for better IPC (although you will get more density with the earlier 10nm nodes from Intel). AMD offers a good value alternative, but it is only recommended if your uses align with its strengths.
     
    Ashtrix, Jon Webb and Johnksss like this.
  46. Johnksss

    Johnksss .

    Reputations:
    11,531
    Messages:
    19,457
    Likes Received:
    12,828
    Trophy Points:
    931
    Let me put it this way: this matters to about 4 percent of the world. The rest of the world couldn't care less. That is the reality of the matter. :)
     
    Jon Webb and ajc9988 like this.
  47. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    No one is saying they will go broke, but the amount of revenue affects executive pay for the year (the incentive to compete and change). Also, going the margin route rather than starting a price war means they want to limit how much cash and market share they lose. By pushing up products, the amount of price cutting is limited, thereby better maintaining margins per chip. This keeps revenues higher, which helps with bonuses and distributions, keeping the company's market capitalization (how much the company is worth: share price multiplied by shares outstanding) higher. This is how they balance the costs of the war against AMD between lost margins and lost market share (which is the proper way to balance those goals, in my opinion).
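    For anyone mixing up the valuation terms being thrown around here, a quick sketch with made-up numbers (these are not Intel's or AMD's actual figures): market capitalization is share price times total shares outstanding, while the "float" is only the subset of shares available for public trading.

    ```python
    # Illustrative valuation arithmetic with hypothetical numbers.
    share_price = 35.00
    shares_outstanding = 4_700_000_000    # hypothetical total shares issued
    insider_and_restricted = 200_000_000  # hypothetical shares not publicly traded

    # Market cap values the whole company at the current share price:
    market_cap = share_price * shares_outstanding

    # The float excludes insider-held and restricted shares:
    public_float = shares_outstanding - insider_and_restricted

    print(f"market cap: ${market_cap / 1e9:.1f}B")       # → market cap: $164.5B
    print(f"float: {public_float / 1e9:.1f}B shares")    # → float: 4.5B shares
    ```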
     
  48. leftsenseless

    leftsenseless Notebook Evangelist

    Reputations:
    426
    Messages:
    392
    Likes Received:
    664
    Trophy Points:
    106
    I agree that they won't go broke at all; they dominate the industry. Yet Intel has been fined in the past for antitrust practices. They have other means to maintain their market share. I'm sure Intel doesn't want to live in a shared-market environment. This seems like their response to Ryzen. As has been stated, Intel usually takes its time to release products that aren't leaps and bounds above the last iteration. Now they are moving fast. You wondered why yourself.
     
  49. Johnksss

    Johnksss .

    Reputations:
    11,531
    Messages:
    19,457
    Likes Received:
    12,828
    Trophy Points:
    931
    You're preaching to the choir, my friend... This same information gets posted in like 50 threads. :)
     
    Ionising_Radiation and ajc9988 like this.
  50. Johnksss

    Johnksss .

    Reputations:
    11,531
    Messages:
    19,457
    Likes Received:
    12,828
    Trophy Points:
    931
    No I don't. I don't care. :) If i had to listen to this every day at work I would go crazy!!!!!!!!!!!!!
     
    Papusan and Mr. Fox like this.
← Previous pageNext page →