FoxNews.com - Intel Plans 1,000-Core Processors -- But How Fast Will They Be?
-
Good for servers but pretty useless for consumers. Software is the limiting factor now.
-
I still can't think of anything I could use a 1,000-core processor for... maybe the supercomputers at my uni need that...
-
Trying to write a program with that many threads would be impossible... I mean, there's multithreading and then there's MULTITHREADING. This is the latter.
Frankly, I'd rather they just concentrated on graphene for the moment instead of adding more cores to make things awkward. Given that graphene can change states 1000 times faster than silicon at the same voltage, it would seem the logical thing to research into, especially since they only started looking at multi-core designs when increases in clock speed for silicon became impractical. -
Yes, while writing an INDIVIDUAL program to use that many threads would be virtually impossible, you could have hundreds of programs running, all on different cores.
Open your task manager and check how many threads you're currently using... it's probably quite a few.
I thought this article (or a similar one; I didn't click the link) came out a while ago, and I believe the processors were clocked extremely low. -
-
I have around 747. It's not weird. Think about how many processes you have running... and the fact that some of those processes are multithreaded.
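To see how quickly thread counts add up, here is a minimal Python sketch (not from the thread; the worker count and idle behavior are purely illustrative):

```python
import threading

def worker(stop):
    # Each worker just waits on the event; real threads
    # would be doing I/O or computation in the meantime.
    stop.wait()

stop = threading.Event()
threads = [threading.Thread(target=worker, args=(stop,)) for _ in range(50)]
for t in threads:
    t.start()

# Counts the main thread plus the 50 idle workers.
print(threading.active_count())

stop.set()
for t in threads:
    t.join()
```

Fifty threads in one toy process; multiply by the dozens of processes a desktop runs and a count in the hundreds is unsurprising.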
-
But yeah, if a consumer computer can use that many threads, how many must a server use? -
Servers probably don't use many more, since their job is just to host information. They also have to perform fewer tasks than our computers.
I dunno, I've never run Windows Server 2008 or anything. -
It is worth mentioning that you probably would not notice any difference between, say, 500 and 1000 cores lol, because most of those threads are so tiny and are scheduled so well by the OS anyway.
-
One day there will be software that can take full advantage of that, believe me.
-
ViciousXUSMC Master Viking NBR Reviewer
Encoding programs like x264 are already designed to split into as many threads as needed to get full use of the CPU.
You would run into an HDD bottleneck with a 1000-core CPU, assuming the cores ran at a decent speed, and that CPU would be super ultra hot if it actually ran at a good speed.
You can't magically make a CPU much faster by adding more cores without adding an equal amount of power draw and heat.
Improving that is a slow process: each new CPU revision brings a better architecture and a smaller nanometer process. -
-
And not to get stupidly off topic, but heat won't be much of a problem with quantum computers because of the incredibly small parts.
-
There aren't that many problem domains that fit generic multi-core. There are languages that can already make use of multiple cores; try searching for 'Haskell and parallel' if you are really interested.
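The same parallel-map idea can be sketched in Python (a hedged illustration, not from the thread; note that for CPU-bound work CPython threads are limited by the GIL, so real speedups need processes or a runtime like GHC's):

```python
from concurrent.futures import ThreadPoolExecutor

def collatz_steps(n):
    # Count steps for n to reach 1 under the Collatz rule;
    # each input is independent, so the job splits cleanly.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# A pool maps inputs across workers, much like a parallel map.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(collatz_steps, range(1, 1001)))

print(results[26])  # 111 steps for n = 27
```

Embarrassingly parallel jobs like this are the easy case; the thread's point stands that most everyday workloads don't decompose so neatly.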
-
ViciousXUSMC Master Viking NBR Reviewer
You can always use RAID or an SSD, but if you took the fastest consumer CPU we have now, the 6-core i7, kept the same performance per clock, and scaled it to 1000 cores, it would be wayyy faster than any normal RAID or SSD setup. Plus, like I said, it would be hotter and more power-hungry than any computer could handle; it would be a supercomputer/mainframe type of thing with liquid nitrogen cooling. -
For a better understanding of threads (windows version) see
Pushing the Limits of Windows: Processes and Threads - Mark's Blog - Site Home - TechNet Blogs -
-
Not sure if they are talking about a RISC-style architecture like the Itanium series and most supercomputer CPUs, because if they are, hundreds to thousands of cores is not a big deal.
-
This is just research; they mean to test the limits of certain technologies, from which the feasible and profitable elements will then trickle down into consumer products. The same idea applies to F1 racing cars.
Intel said that their Pentium 4 NetBurst technology was good for up to 10 GHz, yet 3.8 GHz was pretty much the limit of that technology before it became too unstable/unreliable for use with a normal air-cooled heatsink.
You can't trust everything a company says; you've got to separate the facts from the marketing hype. -
Missing the bigger picture here. The brain works in a similar fashion: multiple areas for processing different functions. We don't process everything in just one part of the brain like a CPU presently does.
This can lead to more automated and naturally functioning systems... -
-
I'm quite sure someone will find some reason to create a program that works with it. Although it seems pretty much impossible... it is still possible.
-
Physics simulation, game AI, and picture rendering for professional software, which already makes use of that many CPUs through rendering farms.
-
Just look at Amdahl's Law, which defines the maximum theoretical scalability across multiple cores. Unless something can be split up 100%, there are greatly diminishing returns when you start to get into crazy numbers of cores.
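Amdahl's Law makes those diminishing returns easy to quantify; a quick sketch (the 95% parallel fraction is an illustrative assumption, not from the article):

```python
def amdahl_speedup(parallel_fraction, cores):
    # Maximum speedup when only `parallel_fraction` of the work
    # can be split across `cores`; the remainder stays serial.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even at 95% parallelizable, 1000 cores top out near 20x.
for cores in (4, 16, 100, 1000):
    print(cores, round(amdahl_speedup(0.95, cores), 1))
```

Going from 100 to 1000 cores buys less than a 3x improvement here, which is exactly the "crazy numbers of cores" problem the post describes.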
-
Tsunade_Hime such bacon. wow
I need one of these to play solitaire. -
As TANWare stated... you guys are missing the point.
This won't make individual programs faster, but it will make your overall experience faster. One program that uses 4 cores will have those cores all to itself. One program that uses 1 core will have a whole core of its own.
There are hundreds of threads running right now as you use your computer... each of those could have its own core. -
-
My point isn't that each core will be dedicated to them; it's that you won't have to schedule the threads the same way.
-
Just consider a home computer security system that can actually see. Two cores, one each, for stereo cameras; others to control individual camera movement and position; others to combine the images and handle depth perception; then cores for image recognition, etc. The system could see in 3D just as we do. The same goes for hearing or any other required senses.
You could have a home monitoring system that can see, hear, and understand threats: warn you when your pet is about to make a mess, watch your elderly parents take their medicine and call you if they don't (or warn of an overdose), recognize a home invasion... the list just goes on and on. Multi-core will open an entirely new world to processing systems.
Will this happen tomorrow? No way. Can it happen someday? As with everything else, if there is a viable market, someone someday will fill it! -
Right now, we use a grid of millions of CPUs all around the world to analyze these events, but in principle, if you had the storage and the RAM, a single 1000-core CPU could do the work of more than a hundred octo-core nodes we currently use. Of course, these cores would have to be better than what is in Intel's current experiment. -
-
-
Some of the problems that see benefits from massive parallelism have already been solved, not by the software community but by hardware. Intel's SSE (i.e. SIMD) is one such example. The same goes for GPUs with all those shaders.
BTW, even in the DOS days, we knew that we were better served if we didn't code linearly (that was what all those interrupts were for). -
Again, like the collision example, we need to find new home-application uses for this, like my example of a system that can see and truly recognize items, etc., as we do in real time. That is just a simplistic example of where massively parallel programming and processing can be used.
In the DOS days we walked away from goto to gosub; that became modular rather than linear, a first step. What I'm referring to is the move to destructible objects. We may now eventually see this as destructible processes, giving even greater meaning to object orientation. Just imagine starting a process and then letting it do its thing as long as needed, fully self-sustained. Just like our brain can reprogram itself as needed, or even as we learn. It opens many possibilities.
In the current state of the art, programming for linear CPUs, Moore's law does not really apply like the OP's quoted post suggests. You really have to start thinking differently, and about the possibilities it may yield.
This is also why we need the slow migration of CPUs to mass multicore. If overnight we got 1000-core CPUs with all of them running at 12 MHz and super small L1 caches (mathematically equivalent, in raw cycles, to a 3.0 GHz quad core), there would be mass consumer rejection. Slow migration is without a doubt the best way to go. -
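The raw-cycle arithmetic behind that comparison checks out (ignoring IPC, cache, and memory effects, which is exactly why consumers would reject it):

```python
# Aggregate clock cycles per second for each configuration.
thousand_core = 1000 * 12e6   # 1000 cores at 12 MHz each
quad_core = 4 * 3e9           # 4 cores at 3.0 GHz each
print(thousand_core == quad_core)  # both are 1.2e10 cycles/s
```

Same cycle budget, but spread over cores too slow and too cache-starved for any single-threaded program to feel fast.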
I agree with TANWare.
If we are going to use today's or yesteryear's standards of linear programming, then having a 1000-core CPU would be meaningless. But of course technology never stands still, and software adapts to hardware for the most part; it's just a matter of time.
We would probably have to go into something like neural programming (networking?) for this to be feasible. Our brain operates like a billion-core CPU, with each neural cluster doing something different. On a linear task our brain would be beaten by even the crudest computers made with vacuum tubes, but on a highly parallelized/neural task nothing can beat it. Thus we are able to walk, see, store what we see, and comprehend, all at the same time.
A 1000-core CPU with highly parallelized/neural programming could allow a robot to walk or see, something it could never do effectively with linear programming.
It could also allow consumers to finally experience virtual reality. Imagine playing a game where every 3D element had a core to itself, or viewing architectural designs where each 3D element had its own core and could be manipulated in real time.
The applications could be boundless, actually. -
We already have similar massive multicore today in the form of GPGPU (in most mid-range GPU cards), programmable via OpenCL or CUDA.
-
-
Wow, some good content guys, +1.
Funky, I'm just trying to point out that writing a program that uses 1,000 threads isn't that hard to do (must be easy if I can do it). Here's just a quick one I made up that uses 160,000 threads; not that they do a lot, just a proof of concept.
As someone suggested, if such processors become available then they will probably change the way software and OSes are written and how threads are utilized. It's usually hardware first, then software to follow, in some cases years later. What would be a <s>good</s> example of highly multi-threaded use, umm... What about software that accepts connections on a network port? You might want to run the same code for each connection, whether that is simply serving data or interacting with the client. By creating a thread for each connection, that code can run concurrently for each client connection with shared access to the program's linear address space. You could probably run a large number of software threads per core; what a good number would be depends on the code and acceptable performance.
Perhaps, though, you mean some program that can fully saturate all cores. Maybe some of the number crunchers could do this. Having a program crunch in less than an hour on a thousand cores what took a week on 4 would be a significant improvement. Again, as already mentioned in the forum, there is scalability. Today the best program I can think of for testing the scalability of Intel cores mathematically, and to the limit, is Linpack, a benchmark derived from solving linear equations. It does show some scaling from 2 to 6 physical cores; hyper-threading actually shows a degradation in performance.
It also seems to support running 1000 threads.
Now where's that 1000-core CPU. -
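A scaled-down version of that proof of concept in Python (a sketch spawning far fewer threads than the 160,000 mentioned; the count and the trivial per-thread work are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def tick():
    # Each thread does a tiny bit of shared-state work and exits,
    # standing in for per-connection or per-task handler code.
    global counter
    with lock:
        counter += 1

threads = [threading.Thread(target=tick) for _ in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1000
```

Spawning the threads is the easy part, as the post says; keeping thousands of them busy with useful, independent work is where the real difficulty lies.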
Although giving that as an example may seem a little narrow-minded, since in everyday life there's little need for such programs. I agree, there is a use for huge numbers of cores if you have a program which could use that many to a large advantage. Also, you have to factor in that although the total processing power would be much larger, the clock speed would probably be much lower, otherwise the whole thing would melt.
My general view is that it's worth researching; it's just only really going to show its true worth in a few specific scenarios. If you're a scientific research centre chewing through protein folding programs and such, then there would be a point, but to the average user (either at home or at work) the advantage would be pretty much wasted.
Intel Plans 1,000-Core Processors
Discussion in 'Hardware Components and Aftermarket Upgrades' started by JWBlue, Nov 24, 2010.