The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    AMD Mobile 5000 series

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by custom90gt, Jan 14, 2021.

  1. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Because benchmarks on their own are meaningless to most users. Even to those users who have benchmarks in their vocabulary.
     
  2. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,807
    Trophy Points:
    331
    They're Intel's hand-picked benchmarks that they were faster in previously. They'll have to go back to the drawing board to design new RUGs now. Notice how they use KineMaster for the video-editing results. Something everyone uses, right?
     
    etern4l likes this.
  3. MyHandsAreBurning

    MyHandsAreBurning Notebook Consultant

    Reputations:
    64
    Messages:
    124
    Likes Received:
    126
    Trophy Points:
    56
    The 5000 series is looking good, with strong performance across the board, although the iGPU and AVX are still clearly an Intel edge (together with some niche stuff like AES encoding, which may appeal to specific use cases). AMD has apparently "enabled CPPC or Collaborative Processor Performance Control on R5KM" ( https://semiaccurate.com/2021/01/26/amd-launches-ryzen-5000-mobile-apus/ ), which may translate to significant power savings, a godsend for iGPU-only laptops.

    I have no need for the AVX and other Intel sugar, so Ryzen pulls ahead even further for me. No idea if/how Intel will be able to match it with 8 core Tiger while staying in the power envelope.
     
  4. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, Intel is a company, just like AMD, and it will show its product in the best light possible. Where you need to stop and think is: do any of the actual tests they run imitate (at all) any of your own workflows? And by 'you' here, I mean most people, not the crabby person we know and love.

    And lo and behold! Actual workflows and workloads. Not 'scores' to have a pissing contest with. :rolleyes:

    From the link above (which I'm sure you'll piss all over too).

     
  5. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,807
    Trophy Points:
    331
    Someone's in a sour mood today.

    Hand-picked benchmarks with time as the default measurement. Now that they won't be top dog in those areas, my guess is that RUGs 2.0 will have other obscure programs that are 'real-world representations' of what everyone does these days...

    Anyway back on topic, another review of the 5980HS:
     
    KING19 and saturnotaku like this.
  6. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    Since lifestyle and game streaming are now considered Real Work™ maybe tiller can relax by doing either.
     
    tilleroftheearth likes this.
  7. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Not Real Work™, but certainly real workloads. ;)

    Yeah, those were some pretty obscure programs huh? :rolleyes: What other absurdities can be considered logical in this context here, I wonder, from the master of deflection?
     
  8. KING19

    KING19 Notebook Deity

    Reputations:
    358
    Messages:
    1,170
    Likes Received:
    778
    Trophy Points:
    131
    Wow, AMD is killing it

    and the 4800H is already beastly enough. With more manufacturers using AMD 5000 CPUs even in their high-end laptops, Intel has a lot of catching up to do.
     
    custom90gt likes this.
  9. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,807
    Trophy Points:
    331
    Lol master of deflection.

    I haven't deflected anything. My point still stands. Intel picks things to run benchmarks on and then says "The 11th Gen Intel® Core™ i7-1185G7 processor performed better on majority of benchmark and RUG testing and has unique features, together delivering the ultimate creation experience."

    I'm excited for the big jumps that Tiger Lake had, but also excited that the 5000 series is better than that.
     
    tilleroftheearth and saturnotaku like this.
  10. Reciever

    Reciever D! For Dragon!

    Reputations:
    1,525
    Messages:
    5,340
    Likes Received:
    4,299
    Trophy Points:
    431
    I thought cherry-picking was generally frowned upon; it doesn't matter if team A or team B is doing the picking.

    I suppose Intel's having trouble since they aren't used to having to actually market their wares.
     
    tilleroftheearth likes this.
  11. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Yeah, you have no valid point. My points hold water, even if you have veto power over anything I post.

    Quoting myself.
     
  12. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    Whenever I need to compare CPUs, my first stop is to go to notebookcheck, in this case to have a bit of a healthy laugh at Intel's 11th gen effort so far:

    https://www.notebookcheck.net/AMD-R...t-out-for-single-core-supremacy.516875.0.html

    Even the single-core hegemony is gone. All that's left is this Thunderbolt, and maybe Optimus - not sure what battery life looks like on AMD laptops with dGPUs, although the CPUs themselves are more efficient.
     
  13. MyHandsAreBurning

    MyHandsAreBurning Notebook Consultant

    Reputations:
    64
    Messages:
    124
    Likes Received:
    126
    Trophy Points:
    56
    On the topic of Optimus, the experience with an AMD iGPU + NVIDIA dGPU + Linux is mediocre due to a lack of proper drivers. You pretty much have to use prime-select to disable/re-enable the dGPU for battery life, and that requires a reboot after reconfiguring.
     
    etern4l likes this.
  14. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    Ryzen 4000 vs 10th-gen Intel favored Ryzen in terms of battery life. I don't expect Ryzen 5000 vs 11th-gen to be any different, especially since the process nodes remain unchanged. If OEMs adopt USB 4.0 40 Gbps on AMD platforms, there goes Thunderbolt's advantage as well. The main issue is that TB3 has specific requirements that need to be met in order to receive Intel's blessing to be labeled as such. USB 4 is a "looser" (for lack of a better word) standard, so it will be incumbent on reviewers and consumers to know what they're getting.
     
    etern4l likes this.
  15. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    Not a huge surprise there, even Optimus can be challenging on some distros. It's nice there is a soft MUX though. What is the battery life and graphics performance using the onboard iGPU?
     
  16. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    Yeah, USB4 looks good, it's basically TB3 - allows PCIe tunneling and 40 Gbps. TB4 doesn't seem to add that much BTW, same bandwidth too. Could you be more specific about why not having Intel supervision could affect USB4 in practice, i.e. make it looser? Are you worried some devices on the market won't be fully compliant, and manufacturers will falsely claim USB4 compatibility?

    I mean Intel doesn't certify earlier USB implementations and we are good, right? Garbage gets bad reviews and dies. No major manufacturer would try to pull any fake USB nonsense over reputational damage/litigation risk.

    Also seems that USB has some central body around it: https://www.usb.org/about
     
  17. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    It's more that USB 4 is going to encompass all aspects of the USB spectrum, sort of like how we got USB 3.0 to start, then USB 3.1, which was divided into Gen 1 (5 Gbps) and Gen 2 (10 Gbps). We're probably going to have USB 4.0 5 Gbps, 10 Gbps, 20 Gbps, and 40 Gbps, all using the Type-C connector. Plus, there's power and display output capability. Manufacturers are going to have to be very clear about what their products are going to offer.
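    The tier confusion described above can be captured in a tiny lookup. A minimal sketch (the dictionary and helper names are made up for illustration; the speeds are the published USB 3.x/USB4 signaling rates):

```python
# Hypothetical lookup illustrating why "USB-C" alone says nothing about speed:
# the same Type-C connector can carry any of these signaling rates (Gbps).
USB_C_SPEEDS = {
    "USB 3.2 Gen 1": 5,       # the original "USB 3.0" rate
    "USB 3.2 Gen 2": 10,
    "USB 3.2 Gen 2x2": 20,
    "USB4 Gen 2x2": 20,
    "USB4 Gen 3x2": 40,       # the Thunderbolt 3-class rate
}

def max_transfer_gbps(label: str) -> int:
    """Look up the raw signaling rate for a given marketing label."""
    return USB_C_SPEEDS[label]

print(max_transfer_gbps("USB4 Gen 3x2"))  # 40
```

    Note that two different marketing labels ("USB 3.2 Gen 2x2" and "USB4 Gen 2x2") map to the same 20 Gbps rate, which is exactly the kind of thing reviewers and consumers have to untangle.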
     
    etern4l likes this.
  18. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    Hopefully not. It's not clear to me why someone would produce a USB4 5 Gbps device. It should be the case that in order to be USB4 certified by that body, all features need to be implemented - otherwise things could indeed get slightly messy. If there is a certification process, then consumers will have an easy filtering shortcut, kind of like WiFi certification.
     
  19. MyHandsAreBurning

    MyHandsAreBurning Notebook Consultant

    Reputations:
    64
    Messages:
    124
    Likes Received:
    126
    Trophy Points:
    56
    Its idle battery consumption is fine, although it could be a bit more efficient on light tasks like YouTube, where my 4800H's iGPU draws about 10 W.
     
    etern4l likes this.
  20. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    I thought AMD supports AVX, except the newest AVX512?

    Also confused by the iGPU comment? Don't these Ryzen APU devices have integrated Vega GPU?

    https://www.techspot.com/review/2188-amd-ryzen-5980hs/

    Are you saying Intel's iGPUs are better? In what way? Performance? Energy efficiency?
     
    Last edited: Jan 27, 2021
  21. MyHandsAreBurning

    MyHandsAreBurning Notebook Consultant

    Reputations:
    64
    Messages:
    124
    Likes Received:
    126
    Trophy Points:
    56
    Yes, I'd meant AVX512, which some of these 'benchmarks' rely on. The Tiger Lake iGPU performs better than the Vega pretty much all round.
     
    etern4l likes this.
  22. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    That's not catastrophic, but hardly ideal. Unless I misunderstand TS my entire i7-8750H CPU and iGPU can idle around 2W on battery, and not much higher plugged in.
     
  23. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    The entire USB-C rollout in the laptop space has been messy. The connector may be the same, but the standards it supports vary wildly. I'm pretty positive the Maingear Vector 2's USB-C port is 5 Gbps, and I know it doesn't support power delivery or display output. Base models of the Razer Blade 15 have one Thunderbolt 3 and one USB-C 10 Gbps port. Both support display output but not power delivery. The Advanced versions of the same laptop have the same USB-C connections but do include power input. The addition of a 20 Gbps speed is going to make things even worse.

    Intel Iris Xe graphics generally perform better than integrated AMD Vega in games, at least once Intel puts out a driver that's optimized.
     
    etern4l likes this.
  24. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    OK, so basically AMD is more applicable in DTRs where battery life or iGPU performance are of little importance. Makes me wonder if Intel is in a position to pressure OEMs to go in the thin and light direction, where it seems to have an advantage over AMD. Could be a double-edged sword though, since in comes ARM/M1 on the other end.
     
  25. Aivxtla

    Aivxtla Notebook Evangelist

    Reputations:
    709
    Messages:
    650
    Likes Received:
    890
    Trophy Points:
    106
    Just some additional info..

    There also seems to be contention as to the usefulness of AVX-512 at the moment for consumer-side chips, in regards to how often it would be used vs the extra die space allocated for it. If only the instructions comprising it weren't also so fragmented across product lines. Intel claims it's important for deep learning and for cross-product-line consistency.

    A former Intel engineer seems to think it's a waste on the client/mobile side, and Linus Torvalds seems to hate it completely.
    Linus Torvalds (Linux Kernel Dev/Creator):
    https://www.phoronix.com/scan.php?page=news_item&px=Linus-Torvalds-On-AVX-512
    Francois Piednoel (Former Principal Engineer at Intel):
    https://www.pcworld.com/article/356...fix-it-former-principal-engineer-unloads.html
    Intel Response:
    https://www.pcworld.com/article/357...itics-who-wish-it-to-die-a-painful-death.html
     
    etern4l and saturnotaku like this.
  26. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    Good point @Aivxtla, the absence of AVX-512 is basically a non-issue. Looks like it's very difficult to leverage properly even in scientific applications, and in fact performance may fall in many cases when this is forced since AVX512 is very sensitive to data alignment, and adversely affects performance of other functional units of the CPU.
     
  27. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    All the points above are true today (well, mostly, but who listens anyway). How it will all play out in the future is anyone's ball game.

    Right now, Intel has the home-court advantage and that 800lbs elephant no one here likes to talk about. Compatibility, reliability, consistency, and dependability. Over and above the fact that they're still an option for any workflow short of a 'moar cores' one.

    Everyone has their opinions, but Intel is still firing on all cylinders as a company. Hard to admit for some, but true.

    Here is the other side of 'poor intel' right after being (supposedly) 'leapfrogged' by AMD's recent offerings.

    See:
    AMD Ryzen 9 5980HS Review (techspot.com)


    See:
    AMD Ryzen 9 5980HS Cezanne Review: Ryzen 5000 Mobile Tested (anandtech.com)

    Yeah, 70C for 'Silent' and 65C for 'Performance'. :rolleyes:

    And those AVX tests are ~3x in favor of Intel.


    See:
    AMD Ryzen 9 5980HS and Ryzen 9 5900HS vs Intel Core i7-11370H comparative benchmarks: Tiger Lake-H35 and Cezanne 35 W battle it out for single-core supremacy - NotebookCheck.net News


    See:
    Ryzen 5000 review: AMD wins big in laptops | PCWorld

    Yeah, RAM matters. :)



    Yeah, have a read of those quotes above. They paint a different, more balanced picture than what is repeated here, ad nauseam.
     
  28. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
  29. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    3D Particle Movement v2.1 is meaningless to me as a 'workload' too.

    But these kinds of 'scores' are meant to show how finely tuned one platform is over the other. In an extremely specific metric.

    Here, the latest Ryzen 5000 offerings are a poor downgrade from what was offered mere months ago from the same company.

    Laughable when compared to the equal TDP (15W) i7-1185G7 or 1-year-old i7-1065G7, with less than 22% of the performance.

    Think about that. Intel is 4.55 times faster. Their well isn't dry yet.

    And while not all of their bets pan out, I'm sure AVX will come into its own in a few more iterations too (for many/most workloads).

    Why doesn't Cinebench twinkle in my eyes? No real-world use-case for my workloads. Nor most people's either.

    Not that people don't render a video now and again (more than me, anyway). But even when they do, they wouldn't do it with just a CPU either. :eek:


     
  30. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,807
    Trophy Points:
    331
    Lol so particle movement is real world but cinebench isn't. OK sure thing lol.
     
    etern4l and saturnotaku like this.
  31. win32asmguy

    win32asmguy Moderator Moderator

    Reputations:
    1,012
    Messages:
    2,844
    Likes Received:
    1,699
    Trophy Points:
    181
    You might try a 5.11 release candidate to see if it helps. They enabled support for multi-plane overlays on Renoir which should help video playback power consumption.
     
  32. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Reading comprehension is a good thing to strive for, really. Or at least produce a better comment or argument.

    But no worries, you've made up your mind a long time ago what (or more accurately, who) is not allowed to be better, or even have the possibility of being correct, ever.

    Don't worry, I'll let you have the last word. You're a hopeless conversationalist. No matter what and how the facts you ask for are ever presented.


     
  33. MyHandsAreBurning

    MyHandsAreBurning Notebook Consultant

    Reputations:
    64
    Messages:
    124
    Likes Received:
    126
    Trophy Points:
    56
    I'll keep that in mind and move to 5.11 (from 5.8) if nothing else breaks. 5.4 to 5.8 was a pretty big improvement in terms of hardware support, so hopefully another kernel update will bring more of the same.
     
  34. win32asmguy

    win32asmguy Moderator Moderator

    Reputations:
    1,012
    Messages:
    2,844
    Likes Received:
    1,699
    Trophy Points:
    181
    Yep. I will try it myself too at some point.

    This is one of those things where I do get excited about the new processors and the IPC gains, but the fact is, for most of 2021 Renoir will probably be the more mature platform for Linux use. If you want something that just works with Linux, it's usually best not to get cutting-edge hardware. This goes for Intel tech too.
     
    etern4l likes this.
  35. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    OK, so by carefully optimising code, it's possible to achieve up to 3x improvements in AVX workloads. I think that's borderline plausible (could be something else going on here); Intel aren't idiots and designed AVX512 for a reason. My two questions on this are:

    1. How easy is it to achieve these gains in practice, even if AVX512 is used?
    * If you don't see why, trust Linus T. and me lol: using it properly is not trivial and there are tradeoffs that could cancel out the improvements (e.g. very coarse memory alignment could lead to higher memory utilisation, worse cache efficiency etc.). It's definitely not something that happens automatically just by plugging in an AVX512 CPU. You need software specifically optimised for AVX512.
    See here for example:
    https://community.intel.com/t5/Inte...-than-AVX2-What-I-am-doing-wrong/td-p/1170678

    On top of that, AVX512 can cause overheating and downclocking, and it disables another CPU unit, which slows down non-AVX code. One more example later on.

    * The reason I brought up Cinebench is not because I assumed most of us are 3D modelers, but because the benchmark uses AVX512 - and in this case Intel is still left in the dust.

    2. How would this particular benchmark perform using GPU?
    * You don't hear anyone talking about doing deep learning or mining on AVX512, haven't heard game devs clamoring for AVX512 (although haven't really been listening)

    I tried googling why AVX512 is awesome according to real users rather than marketing, and didn't get much; this was probably the best hit:
    https://news.ycombinator.com/item?id=15987796

    "I changed some Golang code to AVX in my last project. In isolation that code ran like 2-4x faster but as part of the full program, the program was 5% slower overall. Could never make a sense of it. Any thoughts on how to determine the cause?

    mscrivo on Dec 22, 2017 [–]

    AVX code is known to make the CPU run way hotter than usual. Perhaps that caused throttling that made general code running at the same time, or within a short span thereafter, perform worse?"


    So another question: even if we accept that AVX512 has the potential to improve performance in some cases (SIMD workloads, mainly linear algebra; not sure about encryption in the presence of AES acceleration), is this not a bit irrelevant given the GPU compute capabilities we have today?

    Basically, Intel is trying to beef up its main product by cramming a lot of additional functionality into the CPU. The problem is that while you are using these functions, the rest of the CPU's functions are affected and put on hold, and even worse, other cores can also be affected (downclocking, overheating etc.). Everyone else is using the coprocessor model: we have GPUs, Apple has neural co-processors in the M1, etc., and in this case both the CPU and the coprocessors can work at full speed, although there is some overhead due to data transfers. Which model prevails is an interesting question, but for now it looks like the coprocessor model has the upper hand, even though the old AVX2 is useful and gets used in many applications, including games. It's not clear that further increasing the width of AVX is the way to go.
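    A practical first step implied by the discussion above: before expecting any AVX-512 gain, check whether the CPU even advertises the relevant feature flags. A minimal sketch, Linux-specific (it parses /proc/cpuinfo-style text; the function name and sample string are made up for illustration):

```python
import re

def avx_flags(cpuinfo_text: str) -> set:
    """Extract the avx* feature flags from a /proc/cpuinfo-style flags line."""
    m = re.search(r"^flags\s*:\s*(.*)$", cpuinfo_text, re.MULTILINE)
    if not m:
        return set()
    return {f for f in m.group(1).split() if f.startswith("avx")}

# Abbreviated, made-up flags line resembling a Tiger Lake-class CPU:
sample = "flags : fpu sse2 avx avx2 avx512f avx512vl aes"
print(sorted(avx_flags(sample)))  # ['avx', 'avx2', 'avx512f', 'avx512vl']
```

    On a real Linux machine you would feed it `open("/proc/cpuinfo").read()`; a Ryzen 5000 mobile chip would show `avx`/`avx2` but no `avx512*` entries, matching the thread's point that AVX-512-tuned benchmarks simply don't apply there.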
     
    Last edited: Jan 28, 2021
    tilleroftheearth likes this.
  36. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,807
    Trophy Points:
    331
    Ah, so you've succumbed to throwing insults around. Sorry, I was holding my daughter and had to type one-handed, but that doesn't change the point of my post.

    My point is that you enjoy picking one thing to glorify while shutting down another because you say it's not a workload most users run. I disagree with you. For some reason you are allowed to have the opinion that one bench is meaningful and another is not, but no one else is.

    Yes, you can tune a specific workload (especially AVX512) on an Intel system to be faster than AMD, which doesn't support that instruction set. I have yet to see any consumer application that uses AVX512, have you? I've been following it since my X299 days (June 2017) but haven't found a use for it.

    I'm not going to go through and randomly pick tests that are of no value to me and post about them, but let me try: in NAMD ApoA1 Sim, AMD is 1.94x better. In DigiCortex 1.35, AMD is 4.22x faster even at 15 W. In Blender (something I consider legitimate), AMD took half the time (yes, in seconds!!). OK, I'm bored now and there would be more tests to post about, but you see how pointless that was? Well, maybe not Blender...

    Obviously I don't think Intel is a terrible chip manufacturer, perhaps you haven't seen my signature. I enjoy my laptops, even my old thinkpad (and the hundreds of others that I've bought, benched, and sold). The little XPS 13 2-in-1 is great even with just the i5 version.

    Right now AMD has a lot to offer in the mobile market. Their single-core IPC is better than Intel's. More cores are beneficial in many tasks (including gaming, which is really limited if you have <6 cores on newer AAA titles). Their new L3 design allows their CPUs to run the Photoshop/Office stuff that is obviously so important to your point faster than Intel's 28 W counterpart.

    We will see what Intel can do with an extra few watts and their H series, it's likely what I will pick up unless someone creates a serious competitor to the XPS 15.
     
    tilleroftheearth likes this.
  37. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    Yeah, but how fast can it launch Microsoft Office, huh? Checkmate, losers.

     
    etern4l likes this.
  38. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    Hope Asus takes the 8-lane eGPU idea further by implementing it in larger laptops, and providing a full-size enclosure capable of taking up to a 3090. If that happens, I'm in.
     
  39. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    As I've stated previously here and elsewhere, that would entirely defeat the purpose of what Asus is going for with this product - an ultrabook you can take to the office then play AAA games with while on the road.
     
  40. win32asmguy

    win32asmguy Moderator Moderator

    Reputations:
    1,012
    Messages:
    2,844
    Likes Received:
    1,699
    Trophy Points:
    181
    I am curious if the video outputs on the laptop are connected to the Vega IGP or GTX 1650. It could make a decent Linux ultrabook if the GTX 1650 can be completely disabled and still be able to drive an external display or two via the Vega IGP.

    Too bad they could not fit a 2280 M.2 drive. I guess we are still pretty far off from 2TB 2230 M.2 drives, and there will probably always be a performance difference between the form factors as well.

    I also like the KitGuru review. It is amazing just how much this eGPU beats the Razer Blade Stealth 13 + Razer Core X / desktop 3080, even when using the internal monitor. I am guessing it's even better than the AGA with a 3080. It looks like this Asus eGPU can also hot-plug, so it does not need a reboot like the AGA did.
     
    Last edited: Feb 11, 2021
    Papusan, etern4l and saturnotaku like this.
  41. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    Noted, which is why I am not contesting the eGPU product, but suggesting the core technology solution (the proprietary eGPU connector) could be used in other products....
    That said, I don't buy this direction personally, and would rather have a proper eGPU. They could offer two eGPUs compatible with the connector to broaden the appeal of the product.
     
  42. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    The pro of the Asus eGPU is the 8-lane connector, twice as fast as the one in the AGA. The con is the mobile 150+15 W GPU. If someone managed to get a full desktop 3080 to work in the AGA, my money would still be on that in most applications (unless the internal display is used), given that the fastest mobile 3080 is still way behind the desktop variant.
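    The "twice as fast" comparison is simple arithmetic. A sketch assuming PCIe 3.0 signaling for both links (8 GT/s per lane with 128b/130b encoding; the function name is made up):

```python
def pcie3_bandwidth_gb_per_s(lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe 3.0 link.

    Each lane signals at 8 GT/s; 128b/130b encoding means ~98.5% of
    that is payload; dividing by 8 converts bits to bytes.
    """
    per_lane = 8 * (128 / 130) / 8   # ~0.985 GB/s per lane
    return lanes * per_lane

print(round(pcie3_bandwidth_gb_per_s(8), 1))  # 7.9 GB/s for an x8 connector
print(round(pcie3_bandwidth_gb_per_s(4), 1))  # 3.9 GB/s for an AGA-style x4 link
```

    So an x8 link offers exactly double the raw bandwidth of an x4 one; whether that headroom matters depends on how much the workload streams over the link versus staying in VRAM.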
     
    Last edited: Feb 11, 2021
  43. win32asmguy

    win32asmguy Moderator Moderator

    Reputations:
    1,012
    Messages:
    2,844
    Likes Received:
    1,699
    Trophy Points:
    181
    Maybe they will eventually. For now they seem to be focusing their engineering on a 3070 Laptop eGPU and on upgrading the 1650 MQ to a 3050 MQ. Maybe if there is another AMA with Sascha Krohn @ Asus, the question can be asked to see if it's on the radar?
     
    etern4l likes this.
  44. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    This is too depressing lol
     
  45. win32asmguy

    win32asmguy Moderator Moderator

    Reputations:
    1,012
    Messages:
    2,844
    Likes Received:
    1,699
    Trophy Points:
    181
    Oh, maybe for a ready-to-go solution, but you could still always take any AMD Ryzen 4000/5000 laptop with two M.2 slots and use the second slot for a custom eGPU. Such a config with the Dell G5 SE performs even better than the ROG XG Mobile with the 3080 Mobile, and it's probably cheaper too if you can find a 3080 desktop card that isn't marked up:

    https://egpu.io/forums/builds/2020-...k8ch-rtx-3080-32gbps-m2-adt-link-r43sg-win10/

    It is arguably better too, since you get a better dGPU for mobility, a bigger battery, non-soldered RAM, a full M.2 2280 slot for better SSD performance and capacity, and a larger screen with better specs than the limited options for 13-inch panels.

    It is crazy sometimes just how out of touch the entire industry is with what could be a good eGPU setup. I am guessing the Alienware m15 R1 + AGA was the last one that came close without the majority of downsides. I tried TB3 and the latency overhead was awful, even on an external display. At least with AGA / M.2 solutions you are closer to what you would get from a directly connected card.
     
    Aivxtla and etern4l like this.
  46. etern4l

    etern4l Notebook Virtuoso

    Reputations:
    2,931
    Messages:
    3,533
    Likes Received:
    3,492
    Trophy Points:
    331
    That's a pretty hardcore hack, respect!

    In your comments about TB3, were you referring to the input lag you experienced? That's weird, unless the game was doing a lot of texture streaming. Shouldn't be happening with the latest Ampere cards with their 10 or 24GB of VRAM.
     
  47. win32asmguy

    win32asmguy Moderator Moderator

    Reputations:
    1,012
    Messages:
    2,844
    Likes Received:
    1,699
    Trophy Points:
    181
    Yes it was input lag. The TB3 on the laptop was connected via the PCH instead of the CPU.
     
  48. Ed. Yang

    Ed. Yang Notebook Deity

    Reputations:
    86
    Messages:
    751
    Likes Received:
    199
    Trophy Points:
    56
    The first few IdeaPads with 5000U-series CPUs are now available in both the Singapore and Malaysia markets.
    A comparison against Intel Ice Lake and Tiger Lake models with close specifications shows that...

    the AMD CPU price is possibly on the rise: even if I configure a lower-resolution screen
    to match what the Tiger Lake variant offers and save there, the price
    difference would not be that much...​
     
    etern4l likes this.
  49. saturnotaku

    saturnotaku Notebook Nobel Laureate

    Reputations:
    4,879
    Messages:
    8,926
    Likes Received:
    4,701
    Trophy Points:
    431
    Generally impressive results for the Ryzen 7 5800U. The only real problem is AMD's confusing naming scheme, where older Zen 2 models will still get 5000-series branding.

     
  50. custom90gt

    custom90gt Doc Mod Super Moderator

    Reputations:
    7,907
    Messages:
    3,862
    Likes Received:
    4,807
    Trophy Points:
    331
    Yeah, it's so lame they did that, when their whole reasoning for going with the 5000-series name was to clarify which generation of CPU it is vs the mobile variant...
     
    Papusan and saturnotaku like this.