The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Valve doing multicore in the source engine

    Discussion in 'Gaming (Software and Graphics Cards)' started by Cerebral, Nov 5, 2006.

  1. Cerebral

    Cerebral Notebook Geek

    Reputations:
    4
    Messages:
    88
    Likes Received:
    0
    Trophy Points:
    15
    Saw an interesting article that was linked at Planethalflife. Valve seems to be introducing multicore support into the Source engine.

    http://www.bit-tech.net/gaming/2006/11/02/Multi_core_in_the_Source_Engin/1.html

    Guess it's good news for all of you who are waiting for quad cores to come out :). Too bad it will probably be a while before we see quad cores in notebooks. But the performance increase seen in the quick test on a dual core was impressive, I thought.

    Thought some of you might like the read ;)
     
  2. metalneverdies

    metalneverdies Notebook Evangelist

    Reputations:
    151
    Messages:
    314
    Likes Received:
    0
    Trophy Points:
    30
    nice find!
     
  3. Notebook Solutions

    Notebook Solutions Company Representative NBR Reviewer

    Reputations:
    461
    Messages:
    1,849
    Likes Received:
    0
    Trophy Points:
    55
    Great post, thanks. I also like your avatar, Cerebral :)

    Charlie :)
     
  4. sionyboy

    sionyboy Notebook Evangelist

    Reputations:
    100
    Messages:
    535
    Likes Received:
    0
    Trophy Points:
    30
    Good to see Valve constantly upgrading the Source engine. I think this will pave the way for the new Havok engine in Source, hopefully getting one thread to run physics whilst the other deals with game logic, AI, sound, etc. (a rough sketch of that split is at the end of this post). What Remedy are doing with Alan Wake is brilliant; hopefully we'll see more game developers taking advantage of new dual-core systems.

    It's a shame that they can never meet deadlines on their games though; Ep 2 isn't due until next year now... I think they should just stick to "when it's done" rather than give estimates to the press.
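
    To make that split concrete, here's a minimal C++ sketch of the idea: one dedicated thread stepping the simulation at a fixed rate while the main thread does everything else. This is not Valve's or Havok's actual code; PhysicsWorld and its step() method are made-up stand-ins.

        #include <atomic>
        #include <chrono>
        #include <mutex>
        #include <thread>

        struct PhysicsWorld {
            std::mutex lock;
            void step(double dt) { (void)dt; /* integrate rigid bodies for dt seconds */ }
        };

        std::atomic<bool> running{true};

        void physicsThread(PhysicsWorld& world) {
            using clock = std::chrono::steady_clock;
            auto next = clock::now();
            while (running) {
                {
                    std::lock_guard<std::mutex> g(world.lock);
                    world.step(0.010);                  // fixed 100 Hz timestep
                }
                next += std::chrono::milliseconds(10);
                std::this_thread::sleep_until(next);    // keep a steady rate
            }
        }

        int main() {
            PhysicsWorld world;
            std::thread phys(physicsThread, std::ref(world));

            for (int frame = 0; frame < 300; ++frame) { // stand-in game loop
                // game logic, AI, sound, rendering would run here; read the
                // physics results under world.lock when needed
                std::this_thread::sleep_for(std::chrono::milliseconds(16));
            }

            running = false;
            phys.join();
        }

    The point of the fixed timestep is that the physics stays at a steady 100 Hz regardless of how fast the main thread happens to render.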
     
  5. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    What, making it unable to run on anything with less than four cores? ;)

    Anyway, we'll see how it goes. I for one hope that they've got better programmers working on it than Gabe "I've never touched a line of multithreaded code in my life" Newell.... ;)
     
  6. sionyboy

    sionyboy Notebook Evangelist

    Reputations:
    100
    Messages:
    535
    Likes Received:
    0
    Trophy Points:
    30
    But of course, it's good to see developers making progress, even if it does mean those still with DX8 cards and single-core processors are left out.

    /looks at Pentium M....
    //Looks at MR X1300.....
    ///Cries....

    Ahh well, roll on 360 die shrink and price reduction!
     
  7. hydra

    hydra Breaks Laptops

    Reputations:
    285
    Messages:
    2,834
    Likes Received:
    3
    Trophy Points:
    56
    I just knew there was a reason why Valve likes to upload my system specs for every new download and install ;)

    I hope they release the new code before the Quads...
     
  8. mobius1aic

    mobius1aic Notebook Deity NBR Reviewer

    Reputations:
    240
    Messages:
    957
    Likes Received:
    0
    Trophy Points:
    30
    While it's nice to see Valve taking full advantage of CPUs here and in the future, I do hope they take into account that scalability is important. CryEngine 2.0 is supposed to be able to detect how many cores and threads are present and scale the engine accordingly, making maximum use of the hardware at hand (a tiny sketch of that kind of detection follows below). I really like how Valve and Crytek push the limits, yet don't forget about everyone else in the process. That's why I think they are the top two engine developers in the industry.
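
    For illustration, a tiny C++ sketch of that kind of detection. Nothing here is from CryEngine or Source; the one-worker-per-extra-core policy and the cap are invented for the example. The standard query is std::thread::hardware_concurrency().

        // Illustrative only: detect cores at startup, size the pool from it.
        #include <algorithm>
        #include <cstdio>
        #include <thread>

        int main() {
            // hardware_concurrency() may return 0 if it can't tell;
            // fall back to a single core in that case.
            unsigned cores = std::thread::hardware_concurrency();
            if (cores == 0) cores = 1;

            // One worker per core beyond the main thread, capped for
            // sanity; a single-core machine runs everything on main.
            unsigned workers = std::min(cores > 1 ? cores - 1 : 0u, 7u);

            std::printf("cores=%u -> spawning %u worker thread(s)\n",
                        cores, workers);
        }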
     
  9. Cerebral

    Cerebral Notebook Geek

    Reputations:
    4
    Messages:
    88
    Likes Received:
    0
    Trophy Points:
    15
    Heh, thanks :) It's amazing what you can find browsing a mate's PC.

    I'd really like to see quad-core benchmarks. I think this step would make a great addition to gameplay... looks like I have to save for a notebook and a PC upgrade now :p

    Though I also hope that there will be proper support for the single cores that are available today; some really can still pack a punch, and it would be disappointing to see them left out.
     
  10. Zellio

    Zellio The Dark Knight

    Reputations:
    446
    Messages:
    1,464
    Likes Received:
    0
    Trophy Points:
    55
    Jeez, I can run HL2 on all of my PCs perfectly, even modded with the Cinematic Mod; HL2 Episode 2 must be huge then...
     
  11. Pitabred

    Pitabred Linux geek con rat flail!

    Reputations:
    3,300
    Messages:
    7,115
    Likes Received:
    3
    Trophy Points:
    206
    What? They pack approximately half the punch of a dual-core CPU, and a quarter the punch of a quad-core. Support for single cores will exist, no problem; single cores have been running multi-threaded code and multiple applications for a while. But you aren't gonna get the same kind of performance from a single core as from a multi-core. Valve may have to turn off physics simulations, or other extras, to make the game run decently on a single-core CPU.
     
  12. Dustin Sklavos

    Dustin Sklavos Notebook Deity NBR Reviewer

    Reputations:
    1,892
    Messages:
    1,595
    Likes Received:
    3
    Trophy Points:
    56
    Regardless of whatever middling feelings I have towards Half-Life 2 and Steam, I do have to say this:

    Half-Life 2 scales WELL. Surprisingly well. That game is ridiculously undemanding on hardware; I maxed out AA and AF on my desktop and it still moves like gravy at 1680x1050. Far Cry can't do that. I can't even do that on UT2K4 (which for some odd reason runs like crap).

    They coded an engine that includes freaking everyone. The only other engine I've seen that scaled as well was the Doom 3 engine (don't laugh, I've run it on an All-in-Wonder Radeon 8500DV). While you want to say Far Cry runs well and scales well, it really doesn't. It really enjoys having the surplus of rendering power, but the game looks AWFUL on medium and low settings and at low resolutions.

    I do have a lot of faith in Valve's coding team, and to address what a couple people have said:

    Gabe isn't directly responsible for getting the multithreading code done; it was handed off to some other guy whose name escapes me because I don't feel like opening the article.

    They're coding Source for longevity, which is really awesome and almost seems like it's being made to be the DirectX of 3D engines, if that makes any sense. It's not being multithreaded for dual core; it's being multithreaded for n-core, where n = however many cores you have (a toy sketch of that idea follows at the end of this post).

    These guys are, if nothing else, not just forward thinking but looking at the whole picture, and I respect that.

    This also points out one key thing to me: with the way CPU cores are scaling, it kind of makes a dedicated PPU seem obsolete almost out of the gate, doesn't it? When you can just offload physics calculations to another CPU core, which most people would already have at that point, what the heck do you need a PhysX card for? The concept of a PPU or even a dedicated physics processor (see ATI's "third PCIe x16 slot" implementation) feels superfluous and practically stillborn.
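
    A toy sketch of what "multithreaded for n-core" can look like in C++. This is nothing from Source's real task system; JobQueue and the placeholder jobs are invented. Independent chunks of engine work go into one queue, and however many workers the machine offers drain it.

        // Toy n-core job queue (not Source's actual task system).
        #include <algorithm>
        #include <condition_variable>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        class JobQueue {
            std::queue<std::function<void()>> jobs;
            std::mutex m;
            std::condition_variable cv;
            bool done = false;
        public:
            void push(std::function<void()> job) {
                { std::lock_guard<std::mutex> g(m); jobs.push(std::move(job)); }
                cv.notify_one();
            }
            void shutdown() {
                { std::lock_guard<std::mutex> g(m); done = true; }
                cv.notify_all();
            }
            void worker() {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> g(m);
                        cv.wait(g, [this] { return done || !jobs.empty(); });
                        if (jobs.empty()) return;   // shut down and drained
                        job = std::move(jobs.front());
                        jobs.pop();
                    }
                    job();                          // run outside the lock
                }
            }
        };

        int main() {
            JobQueue q;
            unsigned n = std::max(1u, std::thread::hardware_concurrency());
            std::vector<std::thread> pool;
            for (unsigned i = 0; i < n; ++i)
                pool.emplace_back(&JobQueue::worker, &q);

            // Independent engine work becomes jobs: particle batches,
            // sound mixing, AI ticks... placeholders here.
            for (int i = 0; i < 100; ++i)
                q.push([i] { volatile int work = i * i; (void)work; });

            q.shutdown();                           // drain, then stop
            for (auto& t : pool) t.join();
        }

    The nice property is exactly the one described above: the same code path serves one core or eight, with no per-core special casing.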
     
  13. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    The PPU thing *could* work, but not in its current incarnation. The PCI bus, with its long latency and limited bandwidth, kills it. And yes, just dedicating a CPU core to it would work almost as well.

    But with AMD's Torrenza plans, anyone can develop third-party coprocessors that you just plug into your motherboard for a high-bandwidth, low-latency link. I imagine a PPU would work really well there. But that's probably still a few years off, so yeah, for all practical purposes I'd stick with physics being handled by the CPU (eventually with GPU assistance, since both AMD and NVidia are working on solutions for this).
     
  14. Pitabred

    Pitabred Linux geek con rat flail!

    Reputations:
    3,300
    Messages:
    7,115
    Likes Received:
    3
    Trophy Points:
    206
    A dedicated PPU exists for the same reason you have a dedicated graphics card. A CPU can do anything, but it's not as efficient at many things. Have you seen how much Folding@Home sped up when they released a GPU client? Something like 20x over the FASTEST CPUs. For parallelizable problems like physics, you just can't beat special-purpose hardware.

    And Jalf: bandwidth isn't a huge deal with that... most of the actual work is calculations, and the results of them. The data is mostly flying around on the card itself; it just has to get the input (very little data) and provide the output (again, very little data). A PCI physics card will still speed many complex calculations up, leaving the CPU to deal with AI and sound processing, etc. (a small illustration of that shape of problem follows below).
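
    A small CPU-side illustration of that shape of problem (hypothetical particle code, not PhysX): the arrays going in and out are a couple of megabytes, the substep loop dominates the runtime, and the work splits cleanly into independent chunks, one per core.

        // Data-parallel particle integration: input (positions, velocities)
        // and output are small; the many-substep compute loop dominates.
        #include <thread>
        #include <vector>

        struct Particle { float x, y, z, vx, vy, vz; };

        void integrateChunk(Particle* p, size_t count, int substeps, float dt) {
            for (int s = 0; s < substeps; ++s)          // compute-heavy loop
                for (size_t i = 0; i < count; ++i) {
                    p[i].vz -= 9.81f * dt;              // gravity
                    p[i].x += p[i].vx * dt;
                    p[i].y += p[i].vy * dt;
                    p[i].z += p[i].vz * dt;
                }
        }

        int main() {
            std::vector<Particle> particles(100000);    // ~2.4 MB in and out
            unsigned n = std::thread::hardware_concurrency();
            if (n == 0) n = 1;

            // Disjoint chunks, so no locking is needed between threads.
            std::vector<std::thread> pool;
            size_t chunk = particles.size() / n;
            for (unsigned t = 0; t < n; ++t) {
                size_t begin = t * chunk;
                size_t count = (t == n - 1) ? particles.size() - begin : chunk;
                pool.emplace_back(integrateChunk, particles.data() + begin,
                                  count, 1000, 0.001f); // 1000 substeps of work
            }
            for (auto& th : pool) th.join();
        }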