The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Gtx 580m?

    Discussion in 'Gaming (Software and Graphics Cards)' started by Maxiiboii, Nov 15, 2010.

  1. aintz

    aintz Notebook Evangelist

    Reputations:
    34
    Messages:
    588
    Likes Received:
    3
    Trophy Points:
    31
    GTX 460-like performance! Finally catching up to low/mid-end desktop performance. Prices are still very steep, though.
     
  2. ryzeki

    ryzeki Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,087
    Trophy Points:
    431
    So the GTX 580M is a speed bump from the 485M, and the real new chip is the GTX 570M.

    Depending on price, the GTX 570M might be the best chip Nvidia has in terms of price/performance, but we will see about that. It looks like it lost more than 10% of the CUDA processors, along with other things, so that will not give it the performance it needs to catch up to the HD 6970M.

    Let's see if AMD releases the rumored HD 6990M and HD 6950, to have a complete set of options from both vendors in the high-end arena.
     
  3. redghey

    redghey Newbie

    Reputations:
    0
    Messages:
    3
    Likes Received:
    0
    Trophy Points:
    5
    Do you think it will be possible for us to unlock the rest of the shaders and components of the 570M?
    Also, will the 570M be available to other manufacturers like Sager/Clevo? Because ATM, from what I've read, they are only going to ship it with the MSI 780R.
    I'm just not sure whether to choose the 485M or the 570M, since Optimus seems to be all the hype, especially for saving battery life...

    (Sorry for asking the same question on another thread, but I really want to know :p)
     
  4. Abula

    Abula Puro Chapin

    Reputations:
    1,115
    Messages:
    3,252
    Likes Received:
    13
    Trophy Points:
    106
    Notebookcheck NVIDIA GeForce GTX 570M

    Anandtech Optimus Top to Bottom: NVIDIA Releases the GeForce GTX 570M and 580M

     
  5. ryzeki

    ryzeki Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,087
    Trophy Points:
    431

    By new chip, I meant a new configuration, not just a speed bump like the 560M and 580M. It has a different configuration of shaders, texture units, etc.

    Considering that the 485M was between 5 and 10% faster than the HD 6970M: now take the 485M, strip 12.5% of the CUDA cores, plus fewer ROPs, fewer texture units, and less bandwidth, and you expect such a heavily cut-down chip to be as fast as the HD 6970M? Sure, it will be in a similar realm, but I think the HD 6970M will be the faster card all around.
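
    For reference, here is the cut-down arithmetic as a rough sketch. The 384-core count for the 485M, the 336-core count for the 570M, and the 1150 MHz shader clock are the commonly listed launch specs rather than figures from this thread, so treat them as assumptions.

        # Rough peak shader throughput for the cut-down comparison (assumed launch specs).
        def peak_gflops(cuda_cores, shader_clock_mhz):
            # Fermi cores do one fused multiply-add (2 FLOPs) per shader clock.
            return cuda_cores * 2 * shader_clock_mhz / 1000.0

        gtx485m = peak_gflops(384, 1150)   # ~883 GFLOPS
        gtx570m = peak_gflops(336, 1150)   # ~773 GFLOPS

        print(f"570M / 485M = {gtx570m / gtx485m:.3f}")  # 0.875, i.e. the 12.5% cut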
     
  6. redghey

    redghey Newbie

    Reputations:
    0
    Messages:
    3
    Likes Received:
    0
    Trophy Points:
    5
    Thanks for clearing things up for me, fellas.
    So, final question: should I just go for the 485M or wait for the 570M?
    I'm on a tight budget, so I'm not even going for the 580M.
     
  7. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Yeah, I'm with you that the 570M won't beat the 6970M, but I don't think there will be any huge difference between them. The core speed is indeed 105MHz faster on the 6970M, but the shader speed is 390MHz faster on the 570M, and the memory itself is 250MHz faster on the 570M.

    I personally think it is very nice that Nvidia released a GPU that is around 80W and serves as a middle ground between the 560M and the beast 580M. The extra 20W of TDP to reach the 485M/580M/6970M level is a lot. :)
     
  8. alxlbf2

    alxlbf2 Notebook Consultant

    Reputations:
    13
    Messages:
    230
    Likes Received:
    2
    Trophy Points:
    31
    I never compare GPUs by comparing clocks. It's not really meaningful, because they use different architectures, memory bandwidths, and numbers of cores/pipelines, and have different drivers, etc.
    I think ATI will be the better choice, because we know the prices can't get much higher. Nvidia has (from my perspective) unaffordable prices in the high-end segment. And as far as I know, every game behaves differently on every GPU: in some games the Nvidia card is better, and in other games the ATI card is.
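
    To illustrate the point that raw clocks (and even raw peak FLOPS) do not translate across architectures, here is a rough comparison using commonly listed launch specs, which are assumptions rather than figures from this thread: the GTX 570M's 336 scalar CUDA cores at a 1150 MHz hot clock versus the HD 6970M's 960 VLIW5 stream processors at 680 MHz.

        # On paper the VLIW5 part has far more peak FLOPS, yet real-game performance
        # lands in the same ballpark, because VLIW5 lanes are much harder to keep busy.
        def peak_gflops(alu_count, clock_mhz):
            return alu_count * 2 * clock_mhz / 1000.0  # 2 FLOPs (multiply-add) per ALU per clock

        gtx570m = peak_gflops(336, 1150)   # ~773 GFLOPS, scalar cores at the hot clock
        hd6970m = peak_gflops(960, 680)    # ~1306 GFLOPS, VLIW5 lanes at the core clock

        print(f"HD 6970M / GTX 570M on paper: {hd6970m / gtx570m:.2f}x")  # ~1.7x, unlike real benchmarks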
     
  9. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Yes I agree that they are apples and oranges. And what is "worth it" is very subjective. I personally like automatic switching like Optimus, good driver support, PhysX in some games. Yes they are more expensive, but for some people money is not an issue :)
     
  10. KernalPanic

    KernalPanic White Knight

    Reputations:
    2,125
    Messages:
    1,934
    Likes Received:
    130
    Trophy Points:
    81
    Well, the important part is that Nvidia may actually ship an affordable GPU which creeps into low-enthusiast desktop (GTX 460 768MB) level performance. The new 570M is the real story, IMHO.

    I also believe the 570M won't really outperform the 6970M, but it will be "close enough" to make it a wash, depending on price. (After all, the 6970M doesn't really outperform the 485M, but it's "close enough" to be considered roughly equivalent.)
     
  11. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    If it's 25 to 30 percent faster than the 470M, it's definitely about as fast as the 6970M.
     
  12. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Kevin, you seem to know a bit about GPUs. The GTX 570M is, according to Notebookcheck, around 75W. What I don't get is that it is listed the same as the 560M, but has 336 shaders instead of 192. Shouldn't it use more power? I see the core and memory are clocked higher on the 560M, though. Is that the reason? And since they draw the same power, shouldn't they perform the same?

    One final question: how do you think the 570M compares to the 560M heat-wise? Will it get hotter? Thanks.

    Here are the links:
    570M
    560M
     
  13. ryzeki

    ryzeki Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,087
    Trophy Points:
    431
    Just going by the obvious. If it has been cut down over 10% from the 485M, its best hope is to match the performance of the HD 6970M. It's simple, really. It has the exact same clocks as the 485M, too.

    My guess is that there will be a gap similar to the one between the 485M and the HD 6970M, somewhere around 5-10%.

    First of all, TDP is not a direct indicator of performance. The HD 6970M was rated at between 75 and 100W of thermal design power. That doesn't mean it performs the same as other 75W or 100W GPUs. The 570M does use more power than the 560M, but thanks to their design decisions, both most probably need the same amount of cooling.

    TDP is not how much power they draw; it's the designed thermal power envelope. Although in some cases GPUs do consume about the same as their TDP, they can often consume much more, or even less. Power consumption is not an indicator of performance either, otherwise the AMD processors with higher TDPs would be outperforming Intel CPUs.

    The 570M will consume more power than the 560M. It won't necessarily require that much more cooling, hence the similar TDP.
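
    As a rough illustration of the "design decisions" point: dynamic power scales roughly with the number of active units × clock × voltage², so a wider chip at a lower clock and voltage can land in a similar thermal envelope to a narrower, higher-clocked one. The core counts and shader clocks below are the commonly listed specs; the voltage figures are purely illustrative assumptions.

        # Very rough dynamic-power scaling: P ~ units * clock * voltage^2 (arbitrary units).
        # Core counts and shader clocks are assumed launch specs; voltages are made up for illustration.
        def relative_power(units, clock_mhz, voltage):
            return units * clock_mhz * voltage ** 2

        gtx560m = relative_power(192, 1550, 1.00)   # narrower chip, higher shader clock
        gtx570m = relative_power(336, 1150, 0.90)   # wider chip, lower clock, slightly lower voltage

        print(f"570M / 560M relative dynamic power: {gtx570m / gtx560m:.2f}")
        # ~1.05 with these numbers: more shaders, but the lower clock and voltage^2 claw most of it back.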
     
  14. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Why are you lecturing me on TDP? I know the difference between heat dissipation and power consumption. That is why I said 75W instead of a TDP of 75W. Notebookcheck says a power consumption of 75W for the 570M and the same for the 560M.

    It just doesn't make sense to me what you said, that "thanks to their design decisions" they need the same amount of cooling. They are both using the exact same core model with the same power efficiency, which is why they are both from the 500 series.

    I could understand that the power consumption difference between an Nvidia card and an AMD card does not match the performance difference between them, because they are two totally different architectures. But when we are talking about the exact same core models from the same series and the same architecture from the same company, I'd expect them to be equal in terms of performance/watt and heat/watt. But that obviously isn't the case here, since the 570M is faster. BAH :confused:
     
  15. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    The 570M consuming just 75W is doubtful. The 470M consumed that, and you've added 15% more shaders and a 256-bit bus. If it's a 10% cut-down 580M, it should be consuming something like 80 to 85W.
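
    For what it's worth, here is a back-of-envelope version of that estimate, using the ~100W figure implied earlier in the thread for the 485M/580M tier and the 336-vs-384 CUDA core counts as assumptions. It only bounds the answer; it is not a measurement.

        # Back-of-envelope bounds for the 570M's power draw, not measurements.
        big_chip_watts = 100          # assumed 580M-class budget (~80W + 20W, per an earlier post)
        core_cut = 336 / 384          # the 570M keeps 87.5% of the 580M's CUDA cores

        naive_scaled = big_chip_watts * core_cut   # ~87.5W if power scaled purely with core count
        notebookcheck = 75                         # the quoted figure being doubted here

        print(f"Plausible range: {notebookcheck}W to {naive_scaled:.0f}W")
        # The 80-85W guess above falls between these two bounds.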
     
  16. ryzeki

    ryzeki Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,087
    Trophy Points:
    431
    I wasn't trying to lecture you; it's just that most people treat TDP as the power consumption. You can design boards with the same cooling but different power draws because of several factors, including GPU board size, materials, etc.

    Sorry if I came across sounding that way.

    Notebookcheck quotes power consumption as TDP, including the MXM board and memory. The only way to know for sure is to take the same laptop with different GPUs and measure it at load and at idle. The 570M might be more efficient, with a higher CUDA core count but a lower voltage, resulting in less power consumption than expected. The figures Notebookcheck uses are always estimates with a very broad +/-.

    And as you said, higher clocks might require a higher voltage that ends up raising the power consumption toward that of another GPU. Just as I use my overclocked HD 5870 with higher voltage and end up consuming way more than I should (it kills my 120W PSU), yet I am nowhere near the performance of the HD 6970, even if it's underclocked/undervolted.

    Hell, I remember when they were trying to find out the power consumption of the 480M, and it turned out that SLI consumed way more power than expected, more than the supposed 2x of a single GPU, or something like that.

    Just wait until they measure laptops with both GPUs, and you will see the 570M consuming a bit more power. Not that much, because it should still be more efficient than the top of the line, so maybe they are within 10W of each other, or even less.
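
    A minimal sketch of the load-versus-idle comparison described above, assuming wall-power readings for the same chassis with each GPU. The wattage figures are placeholders, not measurements from this thread, and the method ignores PSU efficiency and CPU load differences.

        # Estimate each GPU's load power as the rise over idle at the wall (rough method).
        def gpu_load_power(idle_watts, load_watts):
            return load_watts - idle_watts

        # Hypothetical wall readings for the same laptop with two different GPUs:
        gtx560m = gpu_load_power(idle_watts=35, load_watts=110)   # ~75W rise
        gtx570m = gpu_load_power(idle_watts=36, load_watts=120)   # ~84W rise

        print(f"Estimated delta between the two GPUs: {gtx570m - gtx560m}W")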
     
  17. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    It's all right, ryzeki. Good post :)

    Yeah, they must be wrong about the power consumption quote from Notebookcheck, Jack. I also believe that it is a few watts more. That explains everything I thought was weird.
     
  18. bar-code

    bar-code Notebook Evangelist

    Reputations:
    394
    Messages:
    337
    Likes Received:
    1
    Trophy Points:
    31
    edited n/a
     