Bad news for people on the Intel platform: NO 10NM UNTIL 2021 EARLIEST, 2022 DESKTOP.
https://www.pcbuildersclub.com/en/2...-in-desktop-before-2022-and-maximum-10-cores/
https://segmentnext.com/2019/04/25/intel-cpu-roadmap-comet-lake-s/
https://semiaccurate.com/2019/04/25/leaked-roadmap-shows-intels-10nm-woes/
https://finance.yahoo.com/news/intel-10nm-desktop-dreams-may-124030746.html
-
tilleroftheearth Wisdom listens quietly...
Bad news? Yawn.
See:
http://forum.notebookreview.com/threads/intel-kills-off-the-10nm-process.825515/page-2#post-10901523
As long as there are choices, bad news is like most marketing/clickbait propaganda: made up.
-
Intel Qualifying 10nm Ice Lake CPUs, Expects First Full-Year Revenue Decrease In Three Years (Tomshardware.com, April 25, 2019)
https://www.tomshardware.com/news/intel-earnings-10nm-ice-lake,39178.html
https://www.pcworld.com/article/339...ges-will-never-happen-again-on-his-watch.html -
I actually think that leaked roadmap is so bad it's unbelievable.
That it starts at Q1 2018 is a bit fishy too. It might be from 2017, or a worst-case scenario based on the capability of their old, broken 10nm before that was all but officially canned.
It'd mean Intel has been misleading the market on their revised 10nm "not+" progress if we get nothing but low-power dual- and quad-core laptop and tablet chips by the end of 2021. Even the new CEO would be implicating himself in the continuation of the lie, when an upper-management changeover provides the single best opportunity for uncomfortable revelations to the market (that the new guys can blame on the old guys). -
And, as mentioned, it is rumored to be fairly recent. Intel refused to comment on it either way. Intel may also have believed they had figured 10nm out when that statement was made, and simply never updated it.
This also gives weight to the rumor that Samsung will be producing Intel graphics cards on Samsung's 7nm process. The roadmap does show an integrated 10nm iGP, but that would not come close to handling a large-die dedicated card, especially since they don't have large-die CPUs listed either.
Just some thoughts. Another thought: if Intel misses getting 7nm worked out by 2022-23, they may be forced to go fabless. That would mean relying on TSMC or Samsung. Now, that is a scary thought, if only because of further fab consolidation. But that is too far out to predict! Also, EUV lithography WILL be available to them by then, so if they get the cobalt integration figured out, they will be right on track. 10nm without EUV, though, seems to be DOA. -
It also seems that the new Intel iGPU technology only comes on the 10nm ULV CPUs arriving at the end of the year, as the 14nm desktop / laptop CPUs still have the current poor-performing onboard iGPUs - maybe another reason why the KF / F CPUs are arriving in force - most buyers have no need for the iGPU on high-performance desktops / laptops that have dGPUs - it took Intel far too long to figure this out.
The low-cost Ryzen APUs with onboard GPU far outpace Intel's highest-performance CPUs in gaming FPS, with the 9900K + onboard iGPU being almost unusable:
http://forum.notebookreview.com/thr...-lake-cpus-z390.811225/page-149#post-10900296
No matter what Intel finally delivers for 10nm, 2x more than hardly anything is still hardly anything - not enough to make a dent in the bottom line or win market share back from AMD for 2019 / 2020.
Hopefully AMD will be wise and take this opportunity to build bridges with vendors to deliver more laptops and desktops with Ryzen CPUs + Radeon GPUs.
We need a strong competitor to Intel for progress to pick up again - and to keep picking up, as it has from AMD's Ryzen push - and hopefully Intel will catch up down the road.
Intel's datacenter sales dropping off a cliff seems doubly significant to me. Is the market really saturated at such a high level, or are customers holding on to already-dated 14nm technology for longer than the usual upgrade cycle - waiting for Intel to deliver on 10nm (7nm?) and for AMD's 7nm DC solutions that promise real performance and cost improvements - instead of wasting more investment in 14nm silicon with architectural security vulnerabilities?
If 10nm CPUs don't get rid of the 14nm security vulnerabilities or the accompanying performance hits the DCs have been suffering, DC customers are going to have a good reason to jump ship to AMD, which has fewer vulnerabilities and takes less of a performance hit mitigating them. If performance alone won't make DC customers jump ship from Intel, then AMD's price + fewer vulnerabilities + lighter mitigation millstones together might.
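For anyone who wants to see which of these mitigations a Linux host is actually running - and paying for - the kernel reports them under sysfs. A minimal sketch, assuming a kernel recent enough to expose that directory:
```python
# Minimal sketch: list the kernel-reported CPU vulnerability mitigations on Linux.
# Assumes a kernel new enough to expose /sys/devices/system/cpu/vulnerabilities.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.glob("*")):
    # Each file holds a one-line status such as "Mitigation: PTI" or "Not affected".
    print(f"{entry.name:20s} {entry.read_text().strip()}")
```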
Why Intel Is Slashing Its Sales Forecasts
Bloomberg Technology
Published on Apr 25, 2019
Apr.25 -- Bloomberg's Nico Grant and Sarah Ponczek break down Intel Corp.'s first-quarter results on "Bloomberg Technology."
If DC customers aren't buying, where is that "redirected 14nm production" going? As I've said before, I've doubted this whole "Intel CPU shortage due to redirection to DC demand" tall tale.
I think the numbers from Intel are showing there is simply no demand in the DC or consumer realms - AMD is gaining market share and Intel has fewer placements of their products amid a quarter-after-quarter market share decline.
Sales shortfalls in a mature company like Intel with market and production dominance are due to customers not buying product, not because production can't keep up. Intel has dropped that story now, and is starting to give a glimpse of the true situation, customers aren't buying product.
What is going to happen when reality is no longer obscured and the truth comes out?
-
Robbo99999 Notebook Prophet
-
Now with Hybrid packages - now chiplets - it's a matter of multi-layer communication close to the CPU die, but off-die to maintain separate power / cooling - with fancier IHS "heat-pipes"(?) - on separate silicon.
Intel's onboard iGPU shouldn't count in a discrete GPU tracking comparison; those tracking numbers would make more sense without Intel. -
With that said, Intel also has the IMC and IO on die, which are hard to shrink. But Intel only has three lines to produce 10nm and just invested heavily in expanding 14nm, meaning there is a need to recoup cost and they have extremely limited 10nm capacity at the moment.
As I explained in a different thread, details on the leak say it was a SIPP roadmap presented to Dell, meaning commercial machines, not consumer machines, which lag behind by a couple of quarters. Then there is the question of how current the map is. So let's give the benefit of the doubt and pull all products in by 1-2 quarters. That puts Comet Lake at Q4 to Q1 2020, which is around the Q3-Q4 window that was rumored for Comet Lake anyway, with the possibility of slipping a quarter.
With that explanation, does it seem more possible now?
Sent from my SM-G900P using Tapatalk -
Robbo99999 Notebook Prophet
-
So, I misstated, if we're being absolutely critical.
Above on the client mobile roadmap, you will see Ice Lake U in a 2c/4c offering, a limited release coming out this year - in fact, it says halfway through Q2. Then you get Tiger Lake U in a 4c offering on 10nm in Q2 of 2020. You also get, in Q3, Rocket Lake U 4/6c with 10nm graphics. On the second image, the Client Commercial roadmap, you can see NO listing for 10nm until Tiger Lake U and Y in Q2 of 2021, both in 4C variants. Allegedly, from reports, certain Xeon E parts may get 10nm.
So, long story short, nothing above 4C on 10nm is planned until after 2021, which places it in 2022. Intel should have 7nm close to ready around then, so it seems they are skipping 10nm entirely.
Now, for anyone here: do you want 4C or less? Do you want a "U" or "Y" series low-power chip? Otherwise, no, there is NO 10nm planned. -
Since you are so sure.
-
But, with little to the contrary, I'll go with the leaks. Not only that, Intel should have 7nm finally ready for volume in 2021, meaning in 2022, they would have those chips, which may be the largest jump in performance on Intel's side in a long time, if ever, if they are going from 14nm straight to 7nm.
Also, as I have said, that is process tech. They will continue improving the architecture over that time period. But with Intel being stuck on 14nm, after being late with 14nm and then giving us Broadwell, I'd have to say yes, they are having huge issues. -
Robbo99999 Notebook Prophet
-
Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk -
1) The desktop roadmap refers to SIPP. That is commercial client deployments, which lag consumer releases. That means you need to subtract a quarter or two from some of the chips to determine consumer releases. Tom's Hardware says 9 months, but I wouldn't go that far. They are counting each quarter as 3 months, but are ignoring that Intel will release Comet Lake in October with wider availability months later, always with the possibility of slipping a quarter.
2) The adoption of Vulkan and DX12, along with Ryzen optimization advancements, means that as time goes on, newer games will scale better. DX12 has CPU optimizations to further parallelize the workload, allowing better core scaling. Although single-core performance is still important, it does improve performance at higher core counts. Vulkan is similar. And, as we've seen with Ryzen game optimizations in certain newer titles, it has added performance where an 8-core has concrete benefits over a 6-core chip. These will only continue with time.
3) Comet Lake may have Sunny Cove architecture advancements. Looking at each revision in architecture, meaning Ivy Bridge to Haswell and Broadwell to Skylake, Intel has improved IPC around 11%. This isn't overall performance, just IPC.
This last point is important. AMD is looking at an 11-15% IPC gain over Zen and Zen+. Zen is around 7% lower IPC than Intel's current offerings. So that means AMD should end up with roughly 4-8% higher IPC than Intel, depending on workload. If Intel releases Comet Lake with an 11% IPC gain, that would put them 4-7% ahead on IPC again. This means AMD really needs frequency improvements to keep the single-core lead (they already win on core count and multi-threaded).
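A quick back-of-envelope check of that compounding, treating the figures in this post as assumptions rather than measurements:
```python
# Back-of-envelope IPC compounding, using the numbers quoted above as assumptions:
# Zen/Zen+ sits ~7% below current Intel IPC, Zen 2 is rumored at +11-15% over that.
intel_ipc = 1.00                        # normalize current Intel (Skylake-derived) IPC
zen_plus_ipc = intel_ipc * (1 - 0.07)   # ~7% deficit

for zen2_gain in (0.11, 0.15):
    zen2_ipc = zen_plus_ipc * (1 + zen2_gain)
    print(f"Zen 2 at +{zen2_gain:.0%} over Zen+: "
          f"{zen2_ipc / intel_ipc - 1:+.1%} vs. current Intel IPC")
# Prints roughly +3.2% and +6.9%, i.e. the low-to-high single-digit lead described
# above (the exact range depends on how the 7% deficit is measured).
```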
Now power consumption to achieve the performance is where Intel will really be hurt! But that is a different story.
Sent from my SM-G900P using Tapatalk -
Robbo99999 Notebook Prophet
-
yrekabakery Notebook Virtuoso
We already saw that happen this gen starting a few years ago, with games like BF1/BFV, Watch Dogs 2, and AC Origins/Odyssey making 4C/4T CPUs obsolete for 60 FPS. Thank god for Ryzen giving Intel the kick in the ass it needed to give us more cores on mainstream/non-HEDT CPUs. -
Robbo99999 Notebook Prophet
-
But even among the games mentioned, many didn't get Ryzen CPU optimizations until Feb/March of this year.
But, definitely gotta be happy for the kick in the pants. It will be interesting to see the Comet Lake 10-core against the Zen 2 12-core. With the rumored 15% IPC gain, it would still take a Zen 2 based 8-core at likely 4.7GHz all-core to match a 9900K clocked to 5GHz. The engineering samples only do 4.5GHz, so unless they've got more gas in the tank, it would still take the 12-core to beat the 8-core Intel chip. And that is assuming the 4.5GHz is an all-core OC, not a single-core boost. -
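Where that 4.7GHz figure comes from, roughly - same assumed IPC numbers as above, so take it as a sketch:
```python
# Sketch of the clock-to-match estimate: per-core throughput ~ IPC x clock, so
# clock_needed = intel_clock * intel_ipc / zen2_ipc (all figures are the rumored ones).
intel_ipc = 1.00
zen2_ipc = (1 - 0.07) * 1.15      # Zen+ ~7% behind Intel, plus the rumored 15% gain
intel_clock_ghz = 5.0             # 9900K all-core overclock

clock_to_match = intel_clock_ghz * intel_ipc / zen2_ipc
print(f"Zen 2 8-core clock needed to match: ~{clock_to_match:.2f} GHz")  # ~4.68 GHz
```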
-
Personally, I feel any mainstream CPU at 8 cores and above, regardless of AMD or Intel, has two primary consumers: 1) game streamers, and 2) entry-level to intermediate content creators, like YouTubers, hobbyists, etc. The second group varies depending on the amount of content produced, the quality necessary for the content, etc. As they grow, they will likely move toward HEDT builds as it makes sense, but this will help get them used to the platforms before that point. And due to more cores and higher-performance graphics cards in the industry at large, there is a burgeoning crowd of streamers and content creators online taking advantage of the lower cost of entry.
But, "that is like [my] opinion, man." (Big Lebowski).
Sent from my SM-G900P using Tapatalk -
I would not say 8-core CPUs are only useful for streamers (who can actually use Intel's QuickSync or Nvidia's NVENC to encode the stream, which would be cheaper and give better performance). Gaming consoles right now use 8-core CPUs, so many game developers have to learn to optimize games for maximum scaling across those 8 cores. And this will also affect PC ports of console games, or games released on both PC and consoles, especially MMORPG games with large PvP battles, where you have to do A LOT of CPU calculations for things like the position and action of each player (there can be hundreds of them in a single battle), or physics calculations for large destructible structures, without relying on proprietary APIs like PhysX (for example, the upcoming Camelot Unchained MMO has its own physics engine that does not use PhysX or other existing APIs).
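To make the "lots of per-player calculations" point concrete, here's a toy sketch of fanning independent per-player updates out across cores - not code from any real engine, and the numbers are made up:
```python
# Toy illustration only: spread per-player work across CPU cores with a process pool.
from multiprocessing import Pool
import os

def update_player(player):
    # Stand-in for the per-player position/physics work done each server tick.
    x, y, vx, vy = player
    for _ in range(10_000):               # pretend this is expensive (collision, pathing...)
        x, y = x + vx * 0.016, y + vy * 0.016
    return (x, y, vx, vy)

if __name__ == "__main__":
    players = [(i * 1.0, i * 2.0, 0.5, -0.3) for i in range(500)]   # ~500-player battle
    with Pool(os.cpu_count()) as pool:     # scales with however many cores are available
        players = pool.map(update_player, players, chunksize=32)
    print(f"updated {len(players)} players on {os.cpu_count()} cores")
```
Real engines do this with native job systems rather than Python processes, but the shape of the problem - independent per-entity work fanned out across cores - is the same, which is why core count starts to matter once player counts get large.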
Now, the 16-core is a little bit excessive. Although if AMD had convinced console manufacturers to use such a CPU for the next console generation, game developers would have quickly found ways to use all those extra cores to maximize performance ;-) -
Can't we just look at what workloads the current 16-core CPUs show good scaling with, and make allowance for better gaming performance on non-NUMA/non-mesh memory subsystems?
I get why AMD are not rushing to release 12/16-core mainstream chips... they would just cannibalise sales of their own 12/16-core Threadrippers, which would force firesale prices like what we saw at EOL of the 1900X.
If Intel keep the same socket for the rest of their 14nm desktop CPUs... maybe one day, for teh gits n shiggles, I'll drop a 12-core in this thing that was originally sold with a 2015 quad core... 1151 may end up longer lived than AM4 ROFL. -
-
-
Sent from my SM-G900P using Tapatalk -
No, I only tried streaming using dedicated hardware encoders. I can see how this program would be useful on a multi-core CPU, since it can apply persistent affinity to individual apps and games, so this should work great for 16-core CPU users.
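For anyone curious what per-process affinity pinning looks like under the hood, here's a minimal sketch using the psutil library - the process name and core list are placeholders, and the tool being discussed does this persistently and automatically rather than as a one-off script:
```python
# Minimal sketch of per-process CPU affinity pinning with psutil
# (process name and core list below are placeholders, not recommendations).
import psutil

TARGET_NAME = "game.exe"      # hypothetical process name
PINNED_CORES = [0, 1, 2, 3]   # cores to restrict it to

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET_NAME:
        proc.cpu_affinity(PINNED_CORES)             # set affinity (Windows/Linux)
        print(proc.pid, "->", proc.cpu_affinity())  # read it back
```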
-
tilleroftheearth Wisdom listens quietly...
For the stated use cases, two 4-core or 8-core platforms will be a lot more productive and future-proof than any single 16-core platform you can buy today, even if each setup comes with 64GB RAM for every 8 cores used.
Having 16 cores today is the equivalent of racing stripes and flame decals on '90's cars...
I've been hearing for almost three years now how AMD will change the PC landscape with their high core offers.
Yeah, still waiting.
There is a reason that Intel didn't lose 80% of net income as AMD did, but that side of the business is conveniently ignored by the AMD blind allegiance here. That reason? Intel is still delivering more performance, period.
The fact that Intel can do this with their oh so old process node(s) is even more noteworthy. But I'm sure I'll be told how 'out of touch' they are again.
It may look like I'm bashing AMD and/or putting Intel on a pedestal. But people, come to your senses: neither you nor I can influence the numbers that matter one iota. When Intel is surpassed, I'll be the first to admit it.
But continually coming up with imaginary uses for 16 cores when there's still not much use for them (especially in this topic/context) is getting a little tiring.
We can revisit this in a month or two, then again in about five or six months if Comet Lake drops around then.
Part of the issue is software designers not parallelizing workloads. As we all mentioned, games are only starting to regularly get scaling beyond six cores. Adobe actually went backwards on scaling in some programs, whereas their competitors do scale but are less used (compare Premiere and Resolve for video editing).
The use of 12 and 16 cores or above is understood for professionals. Consumers, until now, have not had such a luxury. Because of that, and how consumer software is designed, you do have a point on consumers looking for ways to use all that extra power, mainly because software companies haven't designed their products for the commercial space, instead focusing on lower core counts. That is a temporary issue, fixed through changes that are coming.
Hell, with the optimizations in a game like Civ VI: Gathering Storm, the late-game AI turn processing on my 1950X now demolishes ANY 8-core chip out there. Once the programming is in place, your comment will not age well.
Sent from my SM-G900P using Tapatalk -
tilleroftheearth Wisdom listens quietly...
Please stop trying to pigeonhole me into a single workload. I have always stated my workloads are varied, but that they were 'most' like PS. Not only do you not know my actual workloads (I would be a fool (yeah; competitors) to divulge them fully and publicly), but most here have an instant bias: anything or anyone that seems against AMD or pro-Intel is to be ridiculed, instead of engaged in a conversation. Not only are most of my workloads not based on Adobe products currently (and for quite a while now), they are also made up of custom and proprietary code too.
Let's try this again: I'm pro-productivity, period. I support the hardware that actually increases my productivity at the end of the day, not how much I'm liked in a little corner web forum, or how good the 'scores' look on mere synthetic tests. Yeah, I would love to revisit this in a month, half year or even another three years from now, but I don't see the movement needed and required to move beyond 8C/16T in any meaningful way in this time-frame, just like I predicted in another thread so many years ago too.
Professionals who actually need more than 8 Cores were always served well enough, and right now, they have great options of choice. For that, we can thank AMD. But if productivity is their goal and they have a normal, varied workload, just like I do and most of the people I know do too, then mere additional cores are not the answer, even today. This is known by most of the professionals in my circle, intimately.
At the most? One, two or even three high (and very high) core count platforms are commonly used much more efficiently in an organization vs. having every workstation be capable of the highest-demand process, yet come in second, third or last in its most-used processes/workloads. It just makes sense because that is still the reality of how software currently, and for the foreseeable future, works.
Stating that software designers are slow in adopting and utilizing these additional (sometimes available) cores misses the point. I don't go to my suppliers and whine about how I wish things to be and then go about buying and configuring products based on those imaginary wishes. I tell them to provide me with their ultimate platform example and then I test it in my environment. Either it flies or it dives. Next.
While it is nice for consumers to have access to multicore platforms that resemble the best of a few years ago at cheap prices, that has always been the case. Are they now getting mostly on par with those older platforms and in some ways surpassing them? Great! But that doesn't make a multicore platform the 'go-to platform' either, for most.
My comments will age well because I don't say these changes will never come. I'm saying they are not here yet. And until they do, these computers are frequently toys for most because there are other, more suitable and finely tuned options out there and for much less too.
Will a $1K processor 'demolish' a $500 8 core chip in a single example as you've given? Yeah and yawn... (and, I'm simply taking your word for it) of course, it will. When I say the same thing about Intel's offerings for my workloads, why am I wrong then?
I hope that the very slow momentum for (very high) multicore support, and especially for parallelizing software workloads, over the last three years or more will accelerate at a much more rapid pace going forward.
When a current desktop runs my workloads slower than what I had as a 'mobile' workstation years ago, that is not something I will pay for.
And, I have always agreed that, once the programming is in place, we'll all be singing from a different songbook.
But then, so will Intel.
-
Now, with that running critique out of the way, let's get to where you are absolutely correct: getting the machine right for the job. That is why a while back I recommended a cheap AMD capture machine and an Intel 6-core for streaming rather than getting an 8-core. At the time, there were basically no games getting much scaling from the two extra cores. This is just an example, as you've seen me make vastly different recommendations on server and other workloads.
Finally, we are starting to see increased parallelization in programs, allowing them to scale further. That is only going to grow as higher core counts become more common in mainstream and HEDT chips as well as servers. Chiplets are coming to everything. And because, at least until graphene or something similar is viable, we are hitting frequency limits on current technologies while the costs of miniaturization keep increasing, we really only have one choice: moar cores! That is just where we are at on computer tech.
Now, for businesses or enthusiasts, upgrades happen quicker - once it is necessary or makes fiduciary sense for businesses on TCO and ROI, or when there's a jump in performance for enthusiasts. But the overall trend is that ordinary people are buying systems and then holding onto them for longer periods of time. In part this is due to the global economy retracting; in part it is the increased cost of systems while wages are stagnant, making it harder to allocate funds; in part it is the proliferation and higher cost of phones and mobile devices (tablets, laptops) forcing people to plan which device gets upgraded when, etc.
Now, with that last point, that is where we disagree and agree. When it makes sense for us to upgrade, we do it. I rely more on CPU multi-threading, which is why I still have a 980 Ti paired with a 1950X in my workstation (the extra 60% to grab the Intel 16-core wasn't within budget at the time, not saying it is a bad product). Your workloads seem lighter on multi-threading by comparison, but benefit from high-frequency single-thread work. Nothing wrong with that. And for systems in various work environments, you might want to have a 9900K sitting next to a 2990WX machine loaded with 8 graphics cards.
But for average consumers, 5+ years is happening more often. As such, and given the info stated above, higher core counts will age better.
Now, we both have our caveats on stated goals. And depending on where you fall on purchasing, different goals make sense. But it is not always better to buy the best for today while ignoring where things are going, especially for non-business consumers.
Sent from my SM-G900P using Tapatalk -
For that matter, even single threads are not purely sequential at the instruction level, and haven't been for ages. Pipelined instructions are themselves parallel (control parallel, not data parallel), and it's important to issue instructions in a way that doesn't break those pipelines. If you don't use assembly, you're relying on the compiler to do that work for you, but it's still there.
Bottom line, getting better performance will intrinsically require greater use of parallelism, be it at the data or the task level. -
It's not "still the reality" that software currently and for the foreseeable future can't take advantage of lots of cores. That may be the case for your field, whatever it may be, or at least for custom and proprietary code that may not have been updated. But for my field -- software development (working on Kubernetes/OpenShift by day, various other FOSS packages off hours) -- having lots of cores means that builds run that much more quickly, and testing (when written in a way that's not inherently sequential) also completes more quickly. -
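As a rough illustration of the build/test case - a toy sketch, not the actual Kubernetes/OpenShift tooling, and the job list is made up - independent jobs can simply be fanned out across however many cores you have:
```python
# Toy sketch: fan independent build/test jobs out across CPU cores.
# The commands below are placeholders, not a real project's test suite.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

JOBS = [
    ["go", "test", "./pkg/foo/..."],   # hypothetical, independent test packages
    ["go", "test", "./pkg/bar/..."],
    ["go", "test", "./pkg/baz/..."],
]

def run(cmd):
    # Each job runs as its own process, so the work spreads across cores.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return cmd, result.returncode

with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    for cmd, rc in pool.map(run, JOBS):
        print(" ".join(cmd), "->", "ok" if rc == 0 else f"failed ({rc})")
```
This is the same idea behind make -j and parallel test runners: the more independent the jobs, the closer wall-clock time gets to total work divided by core count.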
tilleroftheearth Wisdom listens quietly...
ajc9988 and rlk,
My workloads are not ambiguous, but neither are they 'standard' to anything you or my competitors may like to think or imagine. To understand my workload is simple: I process lots of high-resolution images and transform them into something my clients want. Now, do you know my workloads? No, I didn't think so. But I'll repeat once again: slow, high-multicore platforms are the BS empty promises around here.
I never stated that your workloads may or may not benefit from a high-multicore platform, but I am stating that the majority of users won't benefit (and haven't for the last three years or so either). Pushing AMD or Intel high-multicore platforms as future-proofing at the expense of present-day, real-world performance is kinda sad to me. Those kids don't know any better.
Pointing to the rare software that takes advantage of all those cores efficiently, and better than a lower-core platform, is also a little disingenuous. I would even say that is one more reason why PC sales have stagnated. If 'upgrading' gives you less performance at double the core count, why not wait for the 112 core platform to waste my $$$$$ on?
Do I not want actual productivity workloads to be more parallel? Of course, I do. That is the future. Still doesn't bring it here today.
Let's talk about concrete facts, shall we? I've been saying a version of the above for almost a third of a decade now. What has happened in that time with regards to parallelism in workloads/software that we didn't have before then? This is the only question that needs to be answered. Everything I've been saying rests on that.
Given that slow, glacial progress over the last two or three years (if we give time for AMD's products to be in the hands of the devs...), I really hope that the next equal time period shows exponential results.
But coming back to what you buy today? You always buy the most powerful system you can for your current workloads. Predicting the tech future is a good way to go out of business.
Here's my concrete example: when I joined notebookreview almost 10 years ago looking for info and real-world tests to decide whether SSDs were in my immediate future, I was told to just shut up and buy them. A few years later, I found their use case: over-provisioning by almost 50% at the time. The proof that I offered then still wasn't enough to stop the ridicule from the naysayers. This is no different, but now my 'bs ambiguous workload' is what is attacked instead.
Instead of trying to build yourselves up by tearing me down, try answering the questions above and below that I pose to the whole forum too.
What has transpired over the last few years that has made buying a high core count platform a requirement? And without wishing and speculating on the future, what (if there is any real reason) is buying one today being 'future-proof' to 2024?
Because from where I stand, in 2024 I won't have a single tech item that I'm currently using today. -
custom90gt Doc Mod Super Moderator
I'm going to just step in real quick and remind everyone to be civil to each other.
Having said that, in my opinion, the blanket statement of "buy the fastest you can afford" doesn't make sense. There are so many factors you have to weigh when purchasing a system that you can't make a simple statement like that. You have to look at budget, desired longevity, current/desired usage, cost of downtime, etc...
When I was doing custom builds, I would take a ton of time trying to figure out what the customer actually needed. Sure, I could have just thrown the fastest processor and what have you in there, but that would have been a disservice to the customer. There is no right answer for a build for everyone; that's why there are so many different parts out there. -
The key to optimizing any workflow of this nature is to minimize the amount of work that needs to be serialized, and in particular, the amount of time required by a human. That means making the human interaction as fast as possible, even if it means more back end processing. It's worth analyzing one's workflow carefully. Your actual processing steps might be very different, but it's worth doing the kind of analysis I'm outlining below.
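This is essentially Amdahl's law: the serialized fraction of the workflow caps the speedup no matter how much you parallelize the rest. A quick sketch of the arithmetic:
```python
# Amdahl's law sketch: the serialized fraction of a workflow caps the overall
# speedup, no matter how many workers handle the parallel part.
def speedup(serial_fraction, workers):
    parallel_fraction = 1 - serial_fraction
    return 1 / (serial_fraction + parallel_fraction / workers)

for s in (0.5, 0.2, 0.05):   # fraction of the job that stays serial (e.g. human review)
    print(f"serial {s:.0%}: 8 workers -> {speedup(s, 8):.1f}x, "
          f"64 workers -> {speedup(s, 64):.1f}x")
# With 50% serial work even 64 workers only get ~2x, which is why shrinking the
# human/serial part of the workflow is worth more than simply adding cores.
```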
I have a schematically similar workflow in my avocation (sports photography for my alma mater) that I've put a lot of work into optimizing. I typically take about 2000 frames and keep 300-400 of them, which I upload (you can see this at https://rlk.smugmug.com/Sports). The steps amount to:
1) Shoot the game.
2) Offload the photos onto my system.
3) Import the photos into my image management system (KPhotoAlbum).
4) Review the photos and select the ones I want to keep.
5) Crop and rotate the selected photos.
6) Apply a watermark.
7) Upload the photos.
Step (1), of course, is on the game's time. Step (2) is sequential; it's limited by the I/O throughput (but if I had a fast enough card, it might be worth investigating parallelizing that, to achieve a deeper I/O queue depth).
Step (3) is partially parallelized, helped by some coding I did to partially parallelize checksum computation and thumbnail generation (so there's some data parallelism and some control flow parallelism there) in addition to using an I/O scout thread to pre-read the images into memory. With a fast SSD, it would be worth increasing the number of scouts to improve queue depth, but I don't have an NVMe drive to tune that. More threads might allow greater parallelism of thumbnail and checksum computation if I had an NVMe. Between this and some other improvements, I'm basically I/O limited on a SATA SSD and am completely I/O bound to a hard drive.
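The "scout" idea in miniature looks roughly like this - a toy sketch, not KPhotoAlbum's actual code, and the folder and file extension are placeholders:
```python
# Toy sketch of the "I/O scout" idea: a few threads pre-read image files so the
# importer later finds them already in the OS page cache.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

NUM_SCOUTS = 4    # more scouts = deeper I/O queue, which mainly helps on fast SSDs

def prefetch(path: Path) -> int:
    return len(path.read_bytes())    # touching the bytes pulls them into the cache

def warm_cache(folder: str) -> None:
    files = sorted(Path(folder).glob("*.jpg"))   # placeholder extension
    with ThreadPoolExecutor(max_workers=NUM_SCOUTS) as scouts:
        total = sum(scouts.map(prefetch, files))
    print(f"pre-read {len(files)} files, {total / 1e9:.1f} GB")

if __name__ == "__main__":
    warm_cache("offload")   # placeholder folder from step (2)
```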
Step (4) is, of course, sequential, although KPhotoAlbum preloads images so I don't have to wait to skip to the next image. This is also human-intensive; KPhotoAlbum lets me tag images with a single key and use the space bar to move to the next image (being able to tag-and-next-image in one keystroke might have benefit). This step is one of the two time-consuming steps, in this case because I have to review a lot of images.
Step (5), the processing step, is partly on my time (decide on crop and rotation) and partly computation. There are two basic apps I can use for this, Darktable and RawTherapee (on Linux). I use RawTherapee because the crop workflow is faster; I can do it with one click-and-drag rather than having to position the mouse in the corner and do it in more steps. It's about 5 seconds faster per image because of that; with 300 images, that's not negligible! This is the other time consuming step, and I'd like to see what I can do to further optimize it.
But actually applying the crop, rotate, and watermark (step 6) is something else. Neither Darktable nor RawTherapee efficiently parallelizes image export. They can perform certain operations using multiple threads, but not on multiple images simultaneously. So I wrote a script that extracts the crop and rotation from the sidecar files generated by RawTherapee and uses ImageMagick to apply the crop. This part is parallelized; my script processes multiple images simultaneously. That saves about 10 minutes processing a typical game (a rough sketch of that kind of script is below, after the last step).
Step (7), of course, is network bound. -
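Not rlk's actual script, but a minimal sketch of the approach described in step (6) - it assumes RawTherapee .pp3 sidecars with [Crop] and [Rotation] sections, ImageMagick's convert on the PATH, and placeholder file names:
```python
# Minimal sketch of the parallel crop/rotate/watermark export described above.
# Assumes RawTherapee .pp3 sidecars and ImageMagick's `convert`; paths are placeholders.
import configparser
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

WATERMARK = "watermark.png"      # placeholder watermark image

def export_one(img: Path) -> str:
    sidecar = configparser.ConfigParser(strict=False, interpolation=None)
    sidecar.read(str(img) + ".pp3")                  # RawTherapee sidecar naming
    crop = sidecar["Crop"]
    geometry = f"{crop['W']}x{crop['H']}+{crop['X']}+{crop['Y']}"
    angle = sidecar.get("Rotation", "Degree", fallback="0")
    out = img.with_name(img.stem + "_web.jpg")
    subprocess.run(
        ["convert", str(img), "-rotate", angle, "-crop", geometry, "+repage",
         WATERMARK, "-gravity", "southeast", "-composite", str(out)],
        check=True)
    return out.name

if __name__ == "__main__":
    selects = sorted(Path("selects").glob("*.jpg"))  # placeholder folder of keepers
    with ProcessPoolExecutor() as pool:              # one export per core
        for name in pool.map(export_one, selects):
            print("exported", name)
```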
tilleroftheearth Wisdom listens quietly...
You have a pretty good workflow for a single 'shooter', single user workflow.
I make workloads as parallel as possible by using multiple 'shooters', dozens of workstations and multiple staff.
Multiple workstations are much more productive than a single monster workstation in my experience - especially when a workstation goes down (and they will and they do). 'Chunks' of each shoot are processed on multiple NAS and even more workstations and the entire job process stops when/if the 'perfect' required/contracted images are processed and recognized early on.
Machines simply can't replace humans when selecting 'keeper' images, except for things like focus, etc. If they are used like that, sooner or later the images just all kinda look the same (this was tried and abandoned already). I doubt that this will change in my lifetime, or at least for my clients' needs.
When I was also shooting not that long ago, I would capture up to 1K images per 10 minutes, continuously for hours. And I seldom shot alone. When the shoots were (known) to be shorter, time-wise, 2K+ images per 5 minutes was easily reached, per photographer.
The less an image is retouched, the more life it has (yeah; even RAW images). After effects are more packaging and getting some special images print-worthy at the sizes requested, but seldom significantly slow the above process anymore. The cameras are used to create the 'feel' contracted. The software is only a safety net.
-
Yep, that's certainly one way to do it
It sounds like a highly tuned workflow for your needs, and that you're doing even less post than I am. I agree that parallelizing the photographers and having the fastest processor for a very simple workflow -- which likely does mean high clock rate and low core count -- makes perfect sense for what it sounds like you're doing.
-
Intel Process Technology Update: 10nm Server Products in 1H 2020, Accelerated 7nm in 2021
10NM Ice Lake shipping in June for laptops?
https://www.anandtech.com/show/1431...r-products-in-1h-2020-accelerated-7nm-in-2021
https://www.reddit.com/r/intel/comments/bmaslc/intel_confirms_10nm_to_be_released_this_year_10nm/
I'll be holding onto my 9900K until 2021, methinks. 10nm will be great and all, but 7nm should be a healthy jump for me. Looks like AMD will have a short lead over Intel, but it won't be nearly long enough to dramatically shift market share. -
Robbo99999 Notebook Prophet
-
We already know Comet Lake is 14nm. Unless its successor is 10nm - and there is little to no guidance on that - which would come a year after Comet Lake (read: late 2020), the next node change would be late 2021, which may be the 7nm chips, and those could slide to 2022.
So, the only question is if Intel will skip directly to 7nm for mainstream desktop. Nothing they said discredits my prior analysis.
Sent from my SM-G900P using Tapatalk -
Intel's latest roadmap shows 10nm this year/next month, 10nm+ in 2020, and 10nm++ and 7nm in 2021. Exciting times ahead for all. Somewhere in that mix we will obviously see desktop chips.
-
As far as I have seen so far, the only Ice Lake 10nm CPUs are ULV quad-core CPUs, nothing like a 6c/12t or 8c/16t model. Who wants a 4c CPU in this era of higher core count consumer CPUs?
These are supposedly higher-yield versions of the same 10nm process used for last year's low-production 10nm ULV CPUs that had disabled iGPUs; maybe this year the iGPUs will work? I doubt that 10nm process will match current 14nm IPC or performance at the same clocks - maybe it will match it?
Does anyone see a desktop 10nm part on the charts? I didn't see any such listing. It wasn't obvious to me, and I don't think Intel is sure enough to even suggest a wish date for delivery of 10nm desktop or H-level laptop CPUs.
Based on the 10nm/7nm overlap, with no 10nm desktop parts showing and no 7nm desktop part showing, I still don't know what to think as far as Intel finally delivering any kind of useful 10nm / 7nm desktop / H laptop CPUs.
To me it looks like a nicely filled out chart with 10nm / 7nm BS sprinkled in between the real 14nm production runs, in the same way as the last 3-4 years of missed deliveries for 10nm production promises.
The only difference is that now Intel has added 7nm to their wish list.
Intel's upcoming 10nm and beyond
Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Apr 25, 2019.