I do agree! And having the 16C clocked at 4.0GHz boost! For comparison, Intel's Xeon 2698v3 had a base of 2.3GHz and a boost of 3.6GHz. The 2683v4 had a base of 2.1GHz, boost of 3.0GHz. Neither could overclock, meaning on all cores neither could do 3.6GHz, which is the base of AMD's 16 core. We'll see if that changes this round. I don't think Intel planned more than a 12 core tops for the HEDT segment. So, if they do create one, we'll see what its clocks are...
-
-
You can get Haswell-E to stay at max Turbo by booting up with no (new) microcode. The max is ~3.6GHz for 12C+ though.
-
lctalley0109 Notebook Evangelist
Here is what I have found so far for stable clocks. All clocks were done with Prime95 overnight with blend test:
Temps may not be accurate, since AMD has said Ryzen's reported temperatures are offset, and my office varies from about 70°F to 74°F. RAM is just XMP 2666 (16-18-18-35).
4.0 GHz @ 1.375V - 84°C - Cinebench R15: 1680
3.925 GHz @ 1.325V - 71°C - Cinebench R15: 1640
3.8 GHz @ 1.25V - 67°C - Cinebench R15: 1582
Stock - not much testing, but Cinebench R15: 1570
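As a quick sanity check on those results, here is a rough sketch using only the clocks and scores listed above; the points-per-GHz metric is just a way of eyeballing the scaling, not anything from the original testing:

```python
# Cinebench R15 scores vs. all-core clock, taken from the list above.
results = [(4.000, 1680), (3.925, 1640), (3.800, 1582)]

def points_per_ghz(clock_ghz, score):
    """Cinebench points contributed per GHz of all-core clock."""
    return score / clock_ghz

for clock, score in results:
    print(f"{clock:.3f} GHz -> {score} pts ({points_per_ghz(clock, score):.0f} pts/GHz)")
```

The scaling comes out close to linear (~416-420 pts/GHz at every step), which suggests the extra voltage is buying clock speed rather than any per-clock efficiency.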
Too bad my board does not appear to have VRM sensors, or at least HWiNFO64 is not picking them up. Would like to see those temps. -
AOTS Gets AMD Ryzen CPU Optimizations - Benchmarks
"The team behind Ashes has been able to tweak processor performance in such a manner that we gained roughly 36% performance just from that CPU optimization." -
tilleroftheearth Wisdom listens quietly...
This is like what I experienced the last time I bought a $2K AMD platform many, many years ago; incompatibility with my then current programs and O/S.
This isn't 'harsh' or 'misleading'. Just stating facts (yeah; I actually read the article...).
Of course they'll fix these issues, but if my workflow was affected by this issue and I had jumped on AMD/Ryzen blindly... any performance improvements over my 'old' Intel platforms would have fizzled into thin air in only a matter of hours or a few days (downtime)... This is the very reason that I said I'll revisit Ryzen in a few years.
Compatibility, reliability, longevity and dependability are king for a computing platform. Much more so than nominal 'performance' might initially indicate.
Productivity isn't how fast I can produce work for a few seconds (i.e. world's best overclock...); it is how much work 'done' I can consistently produce over the course of ownership of the entire platform. Being a few seconds/minutes faster, with relatively long periods of downtime, is not conducive to 'sustained performance/productivity over time'.
In the overall scheme of things, this is a minor bug for AMD to fix (I hope!). But to think that this is a very specific use case where this 'bug' shows up is a little short sighted, ime.
-
I think the bug is a non-starter, one of those uncaught flukes. That it will be fixed, though, is a definite plus.
As far as optimizations, well, M$ may win again if they are only incorporated in DX12 versions. Hopefully the optimizations move forward to older engines as well. -
"Every processor is different on how you tune it, and Ryzen gave us some new data points on optimization," Oxide's Dan Baker told PCWorld. "We've invested thousands of hours tuning Intel CPUs to get every last bit of performance out of them, but comparatively little time so far on Ryzen."
http://www.pcworld.com/article/3185...zen-can-benefit-from-optimized-game-code.html -
But, I do agree as far as early adopters. Now, what should be remembered is Zen will be used for years to come, with improvements. This means after this time, and with larger market share coming, we shouldn't have as much lag for optimizing.
Now we know the HEDT will have 16C/32T, quad channel, and more PCIe lanes (meaning tri or quad GPU support not limited by lanes; drivers and programs yet to be seen) while running 3.6 base/4.0 boost. That may challenge Intel at certain tasks if their HEDT only has 12C/24T, even with better support and higher IPC. So, price and performance may not be out of your expectations for a $2K rig.
Now, what I would recommend is to wait for Intel's release in August. With the unveil of Threadripper at Computex and likely sale in late June to July, by August and September you'll have a clear understanding of what benefits are available at what price for your workloads. This will tell you whether your software will support the extra threads and what will suit you. With the two-CPU boards, you may even consider a 2P board and throw two 16 core chips in (a little more expensive on the build, but if your software can handle it, what is $2,500 for 32 cores running 3.6/4.0? That's the cost before GPU and RAM). You don't get 8 channel memory, like with Naples, but you get a higher clock speed.
My point is, your options have increased. As I've said before, I don't know your specific workloads. But seeing general performance and what is expected for both sides, more custom tailored rigs will be possible for your workloads (edit: not saying which will be better for your workloads yet). So don't write it off yet, but also wait for bugs to be addressed and to see this year's competition. Between now and then (5 months; the Ryzen 1800X will have been out 6 months), I bet a lot of productivity software will be able to be optimized!
Sent from my SM-G900P using Tapatalk -
lctalley0109 Notebook Evangelist
1800X
Asrock Fatal1ty X370 Professional Gaming AM4 Motherboard
Swiftech H240X2 Prestige AIO Liquid CPU Cooler
Swiftech AM4 Mounting Kit
You probably don't need all that, but I was just letting you know. -
tilleroftheearth Wisdom listens quietly...
Yeah, those seconds do add up over time to something tangible (just make sure they don't stop adding...).
I too used to think that my latest $18K box would last for 'years to come' - now, I know better. Even the just-discussed Coffee Lake 8th gen processors, while still at 14nm, will be impressive enough for me to jump to (considering ~20% improvement over Skylake...). And that's just a few months from now. Today's Ryzen, even with all its optimizations by then, will have been effectively surpassed by what some are calling 'old tech' (see link below).
See:
http://forum.notebookreview.com/threads/intel-teases-mystery-8th-gen-processors-–-and-confuses-everybody.803111/
I'm not waiting for August or any specific time period... upgrade time is not on my arbitrary schedule; it is when products are actually available to be tested and confirmed working in my production environment.
Right now, my options haven't increased one bit. Ryzen is still too immature for me to consider. Intel still hasn't shown me something better that I can buy right now. And, there is no one else to play/do business with.
But what has increased is my expectation of the possible productivity increases I'll be able to achieve after the next 12 to 18 months or so.
Intel isn't written off by me yet - not by a long shot; XPoint/Optane (v2/v3 or later) is where the next big shift in computing is/will happen. AMD will be able to use it too, of course - but 'compatible' to me is not something I like to settle for - especially as the 'core' of my platform's heart and soul.
My workloads/workflows have proven themselves not to follow what a single BM (or even a lot of them) might indicate; that is why I complete a full cycle (or a dozen) of my daily/weekly workflows when I test new hardware/components/platforms, and software/programs/drivers too. My workflows are pretty consistent and constant - seeing whether the same amount of work can be done with new hardware/software is an easy thing for me to see (or not). The point being that I don't care what others (online mags...) report with regard to their performance stats.
Sure, I'll read their stories - I love to read - but no buying decision is ever made (by me) by looking at graphs and bm 'scores'. I'll still run my own testing procedure (i.e. try to make money with the system I'm considering...) before giving real $$$$ for anything.
And as was seen with the AMD optimizations, not just AMD platforms are helped by those tweaks... Intel platforms saw a performance increase too (just smaller - they were already ahead...).
See:
http://www.tomshardware.com/news/amd-ryzen-game-optimization-aots-escalation,34021.html
-
Sent from my SM-G900P using Tapatalk -
tilleroftheearth Wisdom listens quietly...
I don't really care what it is 'classed' as (all is just conjecture at this point anyways) - productivity gained is what the end goal is.
i.e. I didn't drive fast only when I had a 'sports' car... sometimes, the old minivan is good for some tail out moments too.
We'll all (actually) cross that bridge when we get to it... - right now, I want to see XPoint v2 already...
I like that 'something for everyone, and more to come'! I can live with that.
-
Let me ask the guys in this thread for their honest opinion.
Would you buy an X99 rig for a good price, that comes with 1080 SLI, a really good mobo, and a 6850K, knowing X99 is done, or buy into AM4 (or X390)? The price is $2,600 for the X99, with AM4 costing a little less. Obviously, I will get future upgrades with the AMD platform(s), but the X99, again, is a really good setup. What would you do?
EDIT: This is for my personal gaming rig. I will do some work related activities on it, but for the most part it is solely for gaming and some video projects here and there. -
-
Edit: I flipped my answers...
Sent from my SM-G900P using Tapatalk -
tilleroftheearth Wisdom listens quietly...
Note: I know nothing about gaming... so please adjust the following response accordingly (depending on the games you play and their processor/platform preference). What I'm responding to is the 'work related' part to the question and the assumed obsolescence of the x99 setup.
See:
http://www.cpubenchmark.net/compare.php?cmp[]=2800&cmp[]=2966
Right now, there is an ~11% single core (raw) performance difference that leans towards the Intel setup. That is like getting the next gen/iteration of the platform you're considering (to me, that makes it fairly equal to the AMD Ryzen 7 upgrades possible...). What about that ~6% advantage in multicore performance for AMD? When you consider that 2 extra cores are responsible for that small benefit, I see that as an inefficient design, not a positive. Sure, 140W TDP vs. 95W TDP is also something to think about - but keep in mind how often you would keep either platform pegged at 100% for hours at a time.
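On the TDP point, the dollar difference is easy to ballpark. A rough sketch - the 2 hours/day of full load and the $0.12/kWh rate are my own assumptions, not figures from the post, and real draw rarely equals TDP exactly:

```python
# Rough yearly energy-cost gap between a 140W and a 95W TDP part.
# Assumes (hypothetically) the chips draw exactly TDP while pegged,
# and a $0.12/kWh electricity rate -- both numbers are assumptions.
def yearly_cost_delta(watts_a, watts_b, hours_pegged_per_day,
                      rate_per_kwh=0.12, days=365):
    """Extra dollars per year for running the higher-power part."""
    delta_kwh = abs(watts_a - watts_b) / 1000 * hours_pegged_per_day * days
    return delta_kwh * rate_per_kwh

# Pegged 2 h/day: a 45W gap -> ~32.9 kWh -> about $3.94/year.
print(round(yearly_cost_delta(140, 95, 2), 2))
```

In other words, unless the machine sits pegged for many hours a day, the 45W TDP gap is pocket change over the life of the platform.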
Another thing to consider; with such a high performance system, you're in the top 1% of consumer/prosumer computing platforms. Even if something comes out at double the performance next year; you won't lose any performance from continuing to use the setup you're now considering (and of course, if something came out at double the performance; that is when you start calculating the cost to you by staying on the older (free) platform or spending real $$$$ and getting the latest).
In that link above, there is a ~$70 difference with the Intel being more expensive (is this in the ballpark of the price difference you're seeing?). If you keep this system at least 18 months or more, that very small price difference is effectively insignificant - especially if you're able to use the firepower of this setup days, weeks or months earlier than a competitive AMD setup.
Another point to consider is XPoint support (100%, not just compatibility...). Not only with the AMD setup, but also with the Intel platform we're considering here. This is where I see most responsiveness/productivity/performance will be gained in the next 18 months or so... if you expand your options to the higher end Kaby Lake and newer processors; that is where I would be putting my $$$$$ today. Especially if your current setup is 'good enough' or better and allows you to wait (as long as possible) for a true XPoint powered platform.
Right now? My vote for this particular instance: Intel. If you need to buy 'now'. Time is money and money is always worth much less than time... in the end.
Hope this helps.
-
tilleroftheearth Wisdom listens quietly...
Yeah, I'm waiting. But not for your reasons below.
GPU's have very little to do with the 'raw' performance my workflows require.
If I saved $500 per workstation - it isn't enough with all the known compromises now, nor with all the expected benefits in the near future - and from what I've seen? The savings won't be in that $500 range for a completely decked out platform...
What I'm waiting for is Optane DIMMs - that is where my vote is going... the platforms we can realize today will seem like smartphones instead of workstations (mobile or desktop).
-
Sent from my SM-G900P using Tapatalk -
tilleroftheearth Wisdom listens quietly...
The benefit is huge when viewed within a complete system; relatively small capacity DIMMs accessing high latency SSDs (PCIe or otherwise) is a huge bottleneck.
High capacity Optane RAM (though slower than DIMMs) will still trounce any PCIe access to any current storage device...
Even better? All current storage devices are half duplex... Optane RAM and SSDs are full duplex. Welcome to 2020.
See:
http://www.storagereview.com/intel_optane_ssd_dc_p4800x_enterprise_ssd_launched
-
Sent from my SM-G900P using Tapatalk -
If you do need more than 1TB for hot data, the capacity/cost ratio with DDR3 will drop quickly. -
tilleroftheearth Wisdom listens quietly...
RAID increases latency on almost any current drive available - even as it increases total throughput (mostly sequential).
The proof that what you're suggesting is not optimal is that XPoint is here.
Latency is not to be underestimated - even at the nanosecond level. CPUs live far above that already... (and in the end, CPU + RAM is where all work is done, still).
See:
http://hothardware.com/reviews/inte...ing-3d-xpoint-memory-technology-debuts?page=2
Also, PCIe lanes are grossly limited when compared to DIMMs...
We are only just beginning to use DIMMs for something other than RAM, though...
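To put the latency point in concrete terms, here is a rough sketch of how many CPU cycles elapse during a single storage access. The latencies below are illustrative ballpark figures I'm assuming, not measurements from any of the linked reviews:

```python
# Cycles a CPU core could execute while one storage access completes.
# The latency_us figures used below are illustrative orders of
# magnitude only, not measured values.
def cycles_per_access(latency_us, clock_ghz):
    """CPU clock cycles elapsed during one access of the given latency."""
    return round(latency_us * 1e-6 * clock_ghz * 1e9)

print(cycles_per_access(0.1, 4.0))  # ~100 ns DRAM-class access: 400 cycles
print(cycles_per_access(10, 4.0))   # ~10 us XPoint-class access: 40,000 cycles
print(cycles_per_access(100, 4.0))  # ~100 us NAND-class access: 400,000 cycles
```

Each order of magnitude shaved off storage latency hands hundreds of thousands of cycles back to the CPU per access - which is the whole argument for moving persistent storage onto the memory bus.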
-
tilleroftheearth Wisdom listens quietly...
Those options look good, initially.
Where it fails to provide the benefits and promise of fast RAM is the (s-l-o-w) drivers needed to keep that much data 'live'. Not to mention keeping it backed up to slower storage media, and the added expense and time requirements of a stable, always functioning UPS setup (especially for the RAM - not just the system itself).
A system with a quarter or a half a TB of RAM would make me happy. But not one with slow/ancient cores. Nor one which relies on fleebay to acquire the parts for either.
Seeing a 1960s muscle car fly past at its performance peak is impressive - in the '60s. Not something I would sink my money into to see more than 50 years later... and what you're suggesting to me is almost as many decades apart (tech wise).
Might be an option for some, but I'll pass.
-
Sorry if posted before
AMD Fixes More Ryzen Issues with New BIOS Firmware Microcode - Guru3d.com
"Performance keeps on improving in games with Ryzen, that would be the generic message AMD is evangelizing in a new Blog post." -
tilleroftheearth Wisdom listens quietly...
Papusan, honest question:
What does this do to actual game play (advantage) by going from ~60 to ~90 FPS... minimum?
Will this be effectively a non-advantage for players whose monitors are 60Hz or less?
Players with monitors that can refresh faster than 60Hz... will they see any real advantage in game play - or will this just be eye candy?
Great job on AMD for getting these improvements so quickly, either way.
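For what it's worth, the arithmetic behind the question can be sketched out. The vsync-capped model below is a simplification of my own - it ignores the input-latency and frame-pacing benefits that a higher rendered FPS can still bring on a 60Hz panel:

```python
# Frame-time arithmetic for the 60 -> 90 FPS question.
def frame_time_ms(fps):
    """Milliseconds spent rendering each frame."""
    return 1000.0 / fps

def displayed_fps(rendered_fps, refresh_hz):
    """With vsync, the panel cannot show more frames than it refreshes."""
    return min(rendered_fps, refresh_hz)

print(round(frame_time_ms(60), 1))  # 16.7 ms per frame
print(round(frame_time_ms(90), 1))  # 11.1 ms per frame
print(displayed_fps(90, 60))        # 60: capped on a 60Hz panel
print(displayed_fps(90, 144))       # 90: fully visible on a 144Hz panel
```

So on a 60Hz panel the extra frames are never displayed, while a 144Hz panel shows all 90 - which is roughly the split the questions above are probing.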
-
For my uses, optane is overpriced and doesn't give enough benefit compared to alternatives!
Sent from my SM-G900P using Tapatalk -
Robbo99999 Notebook Prophet
-
I can't see a big difference between OK and very good screen calibration. But 60Hz panels are yesterday's news. And new games are coming that will push hardware even further. High performance will not be less important as your tech begins to age. The bigger the better, as they say...
-
tilleroftheearth Wisdom listens quietly...
You can see the graph in post #1121 above?
8x to 40x lower read latency than an Intel DC P3700... PCIe v3.0 x8... while concurrently doing a random write on the drive (at ~10x what any other SSD can currently do, btw).
RAID0 with anything less will be laughable in a sustained, over time, workflow.
To be clear; Optane as it is today isn't beneficial to me either; as-is.
But if I was betting; Optane/XPoint is clearly going to be huge. Huge. Even over significantly more cores (AMD) or almost any other near term tech leap that I know about today.
-
https://www.pcper.com/reviews/Stora...e-RAID-Tested-Why-So-Snappy/Latency-Distribut
Sent from my SM-G900P using Tapatalk
Edit: it isn't more cores, it is the lowering of latency and the increased PCIe bandwidth that gives the benefit. The core count and its use will depend on the person. But going to three NVMe drives will give you better than two Optane in RAID; although, with less of a bottleneck, it will have a bit more latency but move at faster speeds. So, there is more nuance here than just pushing Optane. Generally, I support the tech as I support phase change memory, but it is way too early to sing its praises over the new setups that will be possible. That is what I'm trying to get at... -
New tech will come, but not tomorrow. And Optane/XPoint is still too young.
Jedec DDR5 & NVDIMM-P Standards Under Development -
tilleroftheearth Wisdom listens quietly...
QD=16? Enough said.
We're talking workstation class workloads, 1 to 4 QD... maybe getting to 8 QD occasionally.
RAID is less responsive even when it's faster on the 'top end'.
Those scores still don't compete with the DC P3700 btw, on a sustained, over time, workflow.
The 'more cores' comment was about Optane vs. AMD's Ryzen. Optane at this point suggests much more performance potential than anything either Intel or AMD can offer from a pure CPU aspect.
See:
http://www.gamersnexus.net/news-pc/2845-intel-optane-dc-p4800x-ssd
There is no CPU option available that will give that kind of increase over existing CPU's.
-
tilleroftheearth Wisdom listens quietly...
Already more mature than the 2018 DDR5 is.
These are not comparable; XPoint is much higher capacity (1TB RAM on a notebook, anyone?), but slower and persistent RAM. DDR5 is just faster volatile RAM (what we already have).
These would complement each other nicely.
-
Intel's Cannon Lake PC chip shipments may slip into next year
"If you were expecting to buy laptops with Intel’s next-generation Core chips—code-named Cannon Lake—by the end of this year, you may be disappointed."
"There’s a chance that shipments of Cannon Lake—Intel’s first on the 10-nanometer production process—may slip into next year."
"So don’t expect Cannon Lake laptops during this year’s holiday season. Instead, users will be able to get PCs with 8th Generation Core processors, which are made on the 14-nm process. PCs now are available with 7th Generation Core processors code-named Kaby Lake."
"Those 8th Generation Core laptops may be more attractive to customers. The first 10-nm Cannon Lake chips will be slower than 14-nm 8th Generation Core processors. Intel acknowledged the speeds during the manufacturing event, with a chart showing 10-nm chips catching up with 14-nm chip performance in one to two years."
"The first Cannon Lake chips will be targeted at low-power laptops and 2-in-1s. PC makers typically need time to test the chips in laptops, so availability of the chips in mainstream PCs may drag into 2018."
-
Edit: even worse, yields are still so bad, they are having to reduce the number of transistors on top of clocking them slower because of the heat! Yet, people still say they are so great! LMFAO!
"But the first low-power Cannon Lake chips will have fewer transistors, and won't be comparable to the mature 14-nm chips with more transistors." -
tilleroftheearth Wisdom listens quietly...
Changing an expected launch date by a few weeks, after initially announcing it just two months ago, is not a reason to state Intel is having problems moving to 10nm.
For whatever reason, they have chosen to release the low power parts first; that is why they have fewer transistors and lower peak clocks too. This is a business decision which I'm sure Intel has not taken lightly. They are shifting gears and their focus (we need to try to keep up...).
Mature 14nm chips with more transistors are i7 QC models... This initial batch of 10nm chips will be 'U' models for tablets and 2-in-1s... you're comparing apples to orangutans.
-
1) Intel planned 10nm to be done with EUV originally. With the delay of that to market, they had to use the older lithography. In fact, the 1-2 year time frame coincides with the release of EUV. Funny how that works out.
2) Intel dropped the tick-tock cadence with 10nm. Why? In part, it was yields. No secret. That information has been publicly available for years. That is why a third 14nm, then a fourth 14nm was planned, with only the fourth version being the same architecture used for Cannon Lake (in other words, a backup in case the yields were not fixed in time).
"The first 10-nm Cannon Lake chips will be slower than 14-nm 8th Generation Core processors. Intel acknowledged the speeds during the manufacturing event, with a chart showing 10-nm chips catching up with 14-nm chip performance in one to two years." This directly states - not comparing apples and oranges, and not qualified by the author - that the speeds are lower and will take years to catch up. That speaks to heat more than yields, so it just compounds the prior yield issues.
"In theory, the 10-nm chips should be faster than the 14-nm chips. But the first low-power Cannon Lake chips will have fewer transistors, and won't be comparable to the mature 14-nm chips with more transistors." Either the author has no ****ing clue what he is talking about (possible, especially if he is comparing anything other than low-power chips), or he is directly stating, in the second sentence, that when the chips are compared equally - both being in laptops - the yields are so low that the new chips have fewer transistors. Considering Intel was touting over a 25% increase in density due to their process, if it has a lower transistor count than the prior generation's low-power chips, low yields are the only conclusion. If the author is comparing the low-power part with a desktop, when he said the 14nm Coffee Lake would be in laptops, then he is switching the comparison and makes no ****ing sense. I will take it he has an idea of how to compare (although he doesn't understand that more compact means higher heat density, so lower clocks, so maybe I shouldn't give him that), so take his words to mean what they say on their face. On its face, the maturity of a node directly correlates to transistor yield. This means yield issues!
(To be fair to you, this sentence suggests the author is an idiot: "But Cannon Lake is expected to beat its low-power Kaby Lake predecessors—which also have fewer transistors—on performance.").
3) When the author states the change in priority - considering Intel will not release a Cannon Lake before next summer - he fails to recognize that the trend for over 5+ years has been to do mobile, mainstream, HEDT and then Xeon (with the latter two often released at the same time; originally the first two launched together, then mobile came first, followed by desktop a month or two later, which is about optimizing yields before serving the server clients, offering them the most mature process and less waste; this was changed because of AMD's core counts, which scare them). As such, Cannon Lake is NOT subject to the change, unless they are moving Xeons and HEDT up, which makes little sense unless it would cannibalize Skylake-E/X significantly on fears of AMD. I doubt that, so I'm guessing it is Ice and Tiger that switch priorities in timing.
So, with the above said, I will gladly stand by my yield and heat comments! -
-
Edit: Also, Coffee Lake was originally planned to be similar to Broadwell, with a more limited lineup than Haswell or Skylake. So, given the discussion of wider application, I'm guessing that it is yields.
Edit 2: https://www.pcgamesn.com/intel/intel-new-stacked-cpu-design
This suggests 10nm is difficult to use on all elements, meaning problems! -
Maybe Intel is desperate? -
-
The GPU is 10nm, so that's supposed to be AMD?
Maybe Intel is desperate and they can't put out something competitive quickly, so they are forced to Frankenstein a solution with various "parts"?
To me that sounds more like a one-off rat-hole that could create more of a mess than a successful interim solution.
The Ryzen response: Intel have forgotten how to deal with a genuinely competitive AMD
https://www.pcgamesn.com/intel/intel-amd-ryzen-competition
"To me, Intel's ad hoc, scattergun response to the swathes of column inches that have been written about AMD's new chips has been ill-conceived at best and irrelevant at worst. They've almost given up on the recent Kaby Lake 7th Gen Core launch by talking about the 15% performance boost you can expect with the upcoming 8th Gen and missed the point of Ryzen by seemingly bringing forward their new expensive high-end desktop platforms."
-
-
Robbo99999 Notebook Prophet
-
https://www.pcgamesn.com/nvidia/nvidia-amd-difference
Great read!!! LMAO! -
tilleroftheearth Wisdom listens quietly...
This is a reply to all beginning from post 1136:
Lol...
I get it; Intel is the big bad wolf that is soon to be eaten by the lovable, docile AMD. Ha!
Nothing that has been stated hasn't been done before by both tech companies (and others too). Is Intel regrouping? Yeah. I sure hope so! But the 'facts' are being taken from a long line of others' opinions...
Unless anyone here works for Intel; nice conjecture. Bravo. Intel is scrambling (I agree) but not to the degree noted by some.
Considered logically:
Intel is still ahead, for the moment (I am not the only one still promoting Intel platforms to my clients...).
Why wouldn't they use every method and process they have at their disposal? While simultaneously working on their next big thing?
While AMD has hit a home run with Ryzen (and seemingly won the game too...), it remains to be seen if they will be able to repeat that process (again and again and again).
Intel on the other hand has a definite plan with their offerings (no, I'm not privy to those plans; I just see their real world implementations like everyone else). Ultimately, nothing seems to be offered at random... and when the pieces do fall into place like they want, they introduce disruptive technologies that leave other companies far behind.
Of course, there is still the business and shareholder side of things... but everyone plays that game... the actual products sold that I can use is what matters; and here, they deliver in spades (to now).
Like I've mentioned before; I don't pay for tech based on the theory, process node or the marketing a company does. No; I buy and recommend to clients tech that has proven in my real world use (and the client's...) to be superior. Period. In that regard, nothing has changed with regards to what I can guess Intel will deliver next. It may not be what I want (then; I simply don't buy it). But it will almost for sure be what I need in the next two or three iterations - if I want to stay competitive, productive and profitable vs. my direct competitors (who will and do buy 'actual' superior tech - not just the promise of it)...
I think that all our opinions are possible realities for the next few years for Intel (more likely; a mix of them).
Focusing on the worst that can happen is reading the 'history' wrong. Sure, it is one possibility.
But in tech, the past has never been very good at predicting the future. Twist those words if you want. But the truth is that whatever 'wrongs' Intel was doing up until now; it doesn't need to continue them. Again; it may for one or more iterations - but it will also be simultaneously working on something else/better too.
The fact that Intel indicates so far in advance that a platform will ship in 2017 Q4 or 2018 Q1 is not a negative. That is a company I can do business with. The fact that it uses 2, 3, 4 or more process dies isn't relevant in the least; the proof will be in the pudding.
What I concentrate on is the real end goal: a platform that is more capable (performance/productive), more efficient, just as stable/reliable and priced fairly - over and above what I already have. If Intel releases a turd, I am not obliged to buy it. If Intel releases something that is lower end than what I bought from them in previous years; I can wait. Especially as there is no real competition (yeah; even today).
If those statements/'facts'/quotes about Intel were indeed true; yeah they seem idiotic. But they are irrelevant too in how I operate.
Am I blindly defending Intel? Nah...
Just showing a blueprint to navigate all the conspiracy theories, biases and other ideologies that may limit others from making logical decisions for their current and future tech purchases.
Aside:
Today's tech started life at least a decade ago. All of it. Even AMD's Ryzen.
There is no company that can jump into new tech with both feet (they can't afford to - not even Intel).
In 2006, AMD released the Turion. I had to try it against my 'older' Intel Core 2 platforms. Not good. This process continued for the next decade, with each new AMD offering that seemed to promise 'more'. Ryzen is the fruit of that goal (to beat/match Intel).
In between, I have seen my productivity increase steadily and impressively. In a decade; that increase is positively explosive! Those 'measly' few % bumps sure do add to a lot - especially with all the random bits and pieces Intel has sprinkled in throughout the years (while holding back the tech they actually have...).
In late 2009, regarding SSDs, Intel said 'wait for the next few processors' that will need an SSD to shine - they were right (~2011 time frame for me). Today, they're showing Optane (can't wait for ~2019 to get here)...
What Intel has conveyed (and proven) to me as a customer (when all the marketing BS is stripped away) is this: we will offer products that enhance real world usage. This, they do in spades.
If or when AMD (or anyone else) can offer the same vision and with the track record to believe it, I'll be in line with $$$$.
Everything else kinda fades into noise...
I have to repeat this again (with the context above):
See:
http://www.gamersnexus.net/news-pc/2845-intel-optane-dc-p4800x-ssd
I almost wish that were applicable to my workflow... I could work 'hard' 2 days a week and be on the beach for 5... -
tilleroftheearth Wisdom listens quietly...
Happy April Fools everyone!
-
Now, diverting to discuss Optane is NOT how you win that argument. You stop talking about the topic and distract with discussion of other products. I am even less impressed considering that, if Intel does do an Ice/Tiger E/X platform, you have to switch motherboards because they are FIVR again. So, I find myself seeing both do what they can, but Intel is blowing sunshine up a lot of asses here!
Edit: You also ignore Intel pushing back releases regularly now by 1-2Q. -
The problem with the 283% increase is that it depends on overloading the swap file. In most cases where that type of stress is involved in an everyday workflow, the system should be built around having enough RAM or other resources to stay away from the swap file.
Now, I would agree these drives would give Ryzen and Intel high core count CPUs much more capability on lower end server motherboards, etc. -
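A quick way to check whether a workflow is actually in the swap-thrashing regime that the 283% figure depends on. This is a Linux-only sketch (it parses /proc/meminfo); the path parameter is there only so the function can be pointed at a test file:

```python
# Report how much swap is in use, in MB. Linux-only: reads /proc/meminfo.
def swap_used_mb(meminfo_path="/proc/meminfo"):
    """Return SwapTotal - SwapFree from a meminfo-format file, in MB."""
    info = {}
    with open(meminfo_path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.strip().split()[0])  # values in kB
    return (info["SwapTotal"] - info["SwapFree"]) / 1024
```

If this stays near zero while the workload runs, the system isn't swap-bound, and neither more RAM nor Optane-as-swap will buy anything like that headline number.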
How can a community drum up and show strong buyer interest in Ryzen notebooks, to convince their suppliers to commit capital and good design and engineering personnel soon?
Discuss and socially spread proposed specs, price/performance, and design proposals, as done at crowdfunding sites?
If so, how can Ryzen notebook proposals differentiate themselves for people who aren't yet interested. Is it down to price? Or does Ryzen have some advantages beyond price for some niches? (such as less power consumption due to less AVX hardware and no iGPU.)
Most manufacturers may prefer to wait for mobile parts such as Raven Ridge with an integrated GPU, so they can power down the discrete GPU and prolong battery life. Is there a big.LITTLE design with two GPUs that could achieve similar power savings? (Then the GPU would be fabricated on a GPU-optimized process rather than a CPU-optimized process.)
Or is there a large enough market of people who buy portable desktop power and don't rely on battery life? (I think the 'lunchbox' portable computer market is small, but the portable ITX market may grow, just add a portable screen like GeChic 1503H and an external keyboard with trackpad or trackpoint.)
AMD's Ryzen CPUs (Ryzen/TR/Epyc) & Vega/Polaris/Navi GPUs
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Rage Set, Dec 14, 2016.