scottrichardson - Tuesday, October 7, 2014 - link
One can only assume this is the GPU that Apple will slot into their upcoming 'Retina' iMacs, unless the AMD rumours hold true?
tviceman - Tuesday, October 7, 2014 - link
It would be crazy of Apple to use Tonga (the rumor) instead of GM204. If Apple really did choose AMD, Nvidia must not have been willing to budge on price.
RussianSensation - Tuesday, October 7, 2014 - link
Actually they wouldn't be crazy. There are at least three reasons why this rumour could be true:
1) Nvidia is filing a lot of patent infringement lawsuits against Samsung, Qualcomm and even Apple. Rumour has it that, as retaliation, Apple will move some of its products to AMD until NV either withdraws the lawsuits or decides to settle at more agreeable terms.
2) NV's GM204 is not cheap, and it certainly won't be cheap for the mobile sector. When you are trying to keep the iMac 5K's price from being stratospheric, you might want to go with a cheaper GPU, because that new 5K display will cost an arm and a leg. In other words, since Macs are hardly used for gaming to begin with, you simply balance your priorities to hit the price points your customers will pay based on historical data. It's possible that the inclusion of GM204 would force a new higher-end SKU of the iMac when combined with a 5K display.
3) Manufacturing supply. As you can see from the availability of desktop GM204 chips, there are supply issues. Apple might have required NV to provide a certain number of GM204 chips, and Nvidia couldn't guarantee that many in xx time frame.
Now, all of these are rumours, and Apple could still use GM204, or GM204 + AMD GPU + Intel GPU in various SKUs of the iMac. However, these are three legitimate reasons why Apple might not use Nvidia's GM204 in the next-gen iMac.
Doormat - Tuesday, October 7, 2014 - link
4) As shown in the chart on page 2, the max resolution of the Maxwell GPU is 4K. If Apple is going to field a 5K display in the 27" iMac, they will need a GPU that supports this resolution, likely requiring DisplayPort 1.3 (or some draft version of DP 1.3 that was available when the chips and panel were being designed).
Morawka - Tuesday, October 7, 2014 - link
The patent suits have nothing to do with Apple, because Apple does not make GPUs.
NV's 204 has the best perf per watt by a huge margin. We all know that having the best perf per watt is going to come at the cost of perf per dollar.
With that said, Apple will probably go with AMD for this year's models, because Apple always flip-flops GPU vendors between generations.
However, it will run hotter and use more power. And historically, AMD/ATI GPUs within Apple products have seen a much higher rate of recalls and defects.
I'm not sure how a mobile AMD chip will perform at 5K. The desktop GPUs seem up to the task, but we haven't seen a really good mobile AMD discrete GPU in a long time.
name99 - Tuesday, October 7, 2014 - link
nVidia has filed lawsuits against Samsung and Qualcomm. NOT Apple.
They may have plans to sue Apple --- anyone may have plans to do anything --- but right now they have not sued Apple, have not threatened to sue Apple, have not even hinted or suggested or implied that they want to sue Apple.
Manufacturing supply (if this rumor is true) strikes me as far more likely. There is a long history of Apple doing things that seemed (especially to haters) like perverse limitations designed to maximally screw their customers over, only for us to learn later that they were limited by supply issues. When you're shipping Apple volumes, you CAN'T simply wish that millions of the latest doodad were available when they can only be manufactured in the hundreds of thousands.
We saw this with fingerprint sensors (restricted to the iPhone 5S, not the 5C, iPod touch, or iPads), probably with sapphire on the iPhone 6, and probably with low-power RAM on the iPhone 6 (that being what's restricting it to 1GB, not some nefarious Apple plan).
Of course, for this rumor to be true and the explanation to be relevant, AMD would have to be able to manufacture faster than nV (or to have acquired a large inventory). It's not clear to me that this has to be true...
chizow - Tuesday, October 7, 2014 - link
It's not surprising actually: AMD is probably throwing these chips at Apple for next to nothing, and Apple probably feels at least some obligation to prop up OpenCL, the standard they championed way back when, over using the proprietary CUDA, by using AMD chips that clearly aren't as good as Nvidia's would be for their needs (performance and low TDP).
Apple's hubris probably leads them to believe they can live without Nvidia and weather the growing pains. My personal experience here at work is that users who rely on Adobe CS have simply shifted to Windows-based workstations with Quadro parts rather than deal with the Mac Pro 2013 GPU/driver issues.
What will really be interesting to see is what Apple does with their MBP, where Kepler was the obvious choice due to efficiency. Maxwell would really shine there, but will Apple be willing to take a big step backwards in performance just to stay consistent with their AMD trend?
Omoronovo - Tuesday, October 7, 2014 - link
That's certainly possible, but bear in mind that Apple generally doesn't put current top-end mobile hardware in their iMacs - that either means a late arrival for the iMacs, or perhaps a 960M when it releases.
I doubt AMD will ever get back into Apple's good graces, as none of their GPUs have met the power/performance levels nVidia has - bear in mind that price is usually a secondary concern for Apple and their consumers, so those are the only two metrics that really matter here.
WinterCharm - Tuesday, October 7, 2014 - link
Yeah, not to mention that many people were disappointed when Apple included AMD graphics in their Mac Pro. I *need* CUDA. I can't live without it, and many other professionals will tell you the same thing.
RussianSensation - Tuesday, October 7, 2014 - link
This is exactly why open standards like OpenCL should be embraced. Saying things like "I can't live without [insert proprietary/locked GPU feature]" is what segregates the industry. A lot of programs benefit from OpenCL, and AMD GPUs provide full GPU hardware acceleration for the Adobe suite (Creative, Mercury, Premiere, etc.) and so on:
http://www.amd.com/en-us/press-releases/Pages/amd-...
http://helpx.adobe.com/premiere-pro/system-require...
Also, the iMac is not a workstation, which means the primary target market of iMacs doesn't perform professional work with CUDA. If you are really in need of professional graphics, you are using a FirePro or Quadro in a desktop workstation, or getting a Mac Pro.
chizow - Tuesday, October 7, 2014 - link
Except most professionals don't want to be part of an ongoing beta project; they want things to just work. Have you followed the Adobe CS/Premiere developments and the OpenCL support fiasco with the Mac Pros, and how much of a PITA they have been for end-users? People in the real world, especially in these pro industries that are heavily iterative, time-sensitive, and time-intensive, cannot afford to lose days, weeks or months waiting for Apple, AMD and Adobe to sort out their problems.
JlHADJOE - Tuesday, October 14, 2014 - link
This. As cool as open standards are, it's also important to not get stuck. The industry has shown that it will embrace open when it works. The majority of web servers use open source, and Linux has effectively displaced UNIX from all sectors except extreme big iron.
But given a choice between open and "actually working", people will choose "working" every time. IE6 became the standard for so long because all of the "open" and "standards-compliant" browsers sucked for a very long time.
mrrvlad - Thursday, October 9, 2014 - link
I have to work with both CUDA and OpenCL (for AMD GPUs) for compute workloads. The main advantage of CUDA is their toolset - AMD's compiler is ages behind and does not allow the developer sufficient control of the code being generated. It's more of a "randomizing" compiler than an optimizing one... I would never even think about using OpenCL for GPU compute if I were starting a new project now.
Ninjawithagun - Sunday, June 28, 2015 - link
The problem is that what you are stating is only half the story. Unfortunately, each company does have a superior solution. Going with OpenCL is a compromise at best, because the code does not automatically maximize or optimize for hardware-specific architectures. Maybe in a perfect world we would have an open, non-proprietary standard across all programming schemes, but it's just not possible. Competition, and more importantly profit, is the key to making money, and neither AMD nor Nvidia will budge. Both parties are just as guilty as the other in this respect.
atlantico - Wednesday, October 15, 2014 - link
Apple will *never* embrace CUDA. OpenCL is an important part of the future vision and strategy of Apple; whatever Nvidia is pushing, Apple is not buying.
RussianSensation - Tuesday, October 7, 2014 - link
If all Apple cared about was performance/watt, the Mac Pro would not feature AMD's Tahiti cores. There is even a paragraph dedicated to explaining the importance of OpenCL for Apple:
"GPU computing with OpenCL.
OpenCL lets you tap into the parallel computing power of modern GPUs and multicore CPUs to accelerate compute-intensive tasks in your Mac apps. Use OpenCL to incorporate advanced numerical and data analytics features, perform cutting-edge image and media processing, and deliver accurate physics simulations."
https://www.apple.com/mac-pro/performance/
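As a rough illustration of the programming model that blurb describes, here is a minimal sketch using the pyopencl bindings (the library choice and the trivial vector-add kernel are illustrative assumptions, not something from this thread): the host code picks a device, copies data over, and queues a tiny kernel that runs one work-item per element.

```python
# Minimal OpenCL example (sketch): host code queues a vector-add kernel on
# whatever OpenCL device (GPU or multicore CPU) is available.
import numpy as np
import pyopencl as cl

a = np.random.rand(50_000).astype(np.float32)
b = np.random.rand(50_000).astype(np.float32)

ctx = cl.create_some_context()        # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)  # one work-item per element

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```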
Apple is known to switch between NV and AMD. Stating that AMD is not in good graces with Apple is ridiculous considering the Mac Pro has the less power-efficient Tahiti vs. GK104s. And that is for a reason -- because the HD7990 beats the 690 at 4K, and destroys it in compute tasks -- which is proof that performance/watt is not the only factor Apple looks at for their GPU selection.
Omoronovo - Wednesday, October 8, 2014 - link
I didn't mean to imply that it was *literally* the only factor taken into account; they clearly wouldn't use a GPU that cost $3,000 if a competing one with similar (but worse) performance/watt was $300.
I was trying to emphasize that, all other factors being equal - i.e., standards compliance, compatibility, supply, etc. - performance/watt is the prime metric used to determine hardware choices. The Tahiti vs. GK104 comparison is a great one - AMD pushed OpenCL extremely heavily and their support for it was essentially universal, while nVidia was slow on the uptake of OpenCL support as they were pushing for CUDA.
bischofs - Tuesday, October 7, 2014 - link
I may be wrong, but it seems like the only reason the mobile chips are catching up to the desktop is that they haven't really improved PC cards in 5+ years. Instead of pushing the limits on the PC - building architectures based on pure performance rather than efficiency - and scaling them down, they are doing the opposite, so the performance difference is getting smaller. It is strange that they are marketing this as a good thing, given that there is a rather large difference in the power and cooling available in a tower, so there should be a large performance gap.
Razyre - Tuesday, October 7, 2014 - link
Not at all. Hawaii shows this if anything: the 290X is balls-to-the-wall in OpenCL, while Nvidia's cards are more conservative and gaming-optimised, yet they still pack an as-good and usually better punch in frame rates.
Cards are getting too hot at 200-300W; you need silly cooling solutions which are either expensive or make your card larger or louder.
The Maxwell series is phenomenal; it drastically improves frame rates while halving the power consumption relative to the equivalent chips from 2 years ago.
GPUs have come on SO far since 2009, yet you are touting that they've barely improved. Let's say you pit a 5870 against a 290X. The 7970, a 2012 GPU, is about twice as powerful as the 5870 (slightly less in places), and the current 290X is about 30% better than a 7970. So you're effectively seeing a theoretical 130% improvement in performance over 4 years (I say this because the 290X is now a year old), an average of 30%-ish improvement per year.
Considering the massive R&D costs, and the costs associated with moving to smaller dies to fit more transistors on a chip (which increases heat - which is why Nvidia's Maxwell is a great leap, since they can now jam way more transistors on there for the GK110 replacement), GPUs have come on leaps and bounds.
The only reason it might look like they haven't is that instead of jumping from, let's say, a standard 1680x1050 to 1920x1080, we jumped to 3840x2160 - a FOUR TIMES increase in resolution.
Mobile GPUs have made even more impressive progress, really. That chart showing the gap closing between mobile and desktop GPUs isn't too far off.
bischofs - Tuesday, October 7, 2014 - link
I don't know much about the AMD stuff you are talking about, but I have probably more anecdotal evidence. Software, and more importantly games, for PCs have been pretty stagnant as far as resources go; I used a GTX 260 card for about 5 years and never had problems running anything until recently. Games are the largest driver of innovation, and most games are built for consoles, with a large percentage also being built for mobile devices. I've been playing games that look pretty much the same at 1080p for 5+ years on my PC; the only thing that has been added is more graphical features. Further support of my argument is processors: I remember the jump from the Core 2 to Nehalem was astounding, but from then on (still running my i7-920 from 2008) it's been lower power consumption and more cores with small changes in architecture. So you might throw some percentages around, but I just don't see it.
ruggia - Tuesday, October 7, 2014 - link
Well, I think you meant stagnation in 1080p gaming requirements, not stagnation in GPU performance. PC cards indeed reached the performance required for 1080p gaming 5 years ago. Since then, the focus has shifted to supporting higher resolutions (1440p/4K) or higher frame rates (100+ Hz), which weren't possible 5 years ago. Right now, even a low-end card can easily best your GTX 260 without problems.
ezorb - Tuesday, October 7, 2014 - link
I have an i7-920 @ 3.2GHz and a 2600K @ 4.5GHz, both rock solid. The single-generation jump is just as great as Core 2 to Nehalem, and the power consumption and noise are halved on the Sandy Bridge and even lower at idle. The 4770K at stock clocks is faster than the 2600K @ 4.5GHz, so I disagree, but clearly you have LOW standards, because I ditched the GTX 260 for a 5870, which was a massive improvement.
EzioAs - Tuesday, October 7, 2014 - link
By your assumption, a 290X should be around 167% faster than the 5870... therefore an average performance increase of 41% per year.
RussianSensation - Tuesday, October 7, 2014 - link
EzioAs, the 290X is 2.95X as fast as (or 195% faster than) an HD5870. Here are the breakdowns from 2009:
AMD = using 5870 as base of 100%
290X (max) = 295% (so nearly 3x faster)
HD7970Ghz = 229%
HD6970 = 129%
HD5870 = 100%
Nvidia = using 480 as base of 100%
980 (extrapolated from 780Ti) = 261%
780Ti = 248%
GTX680 = 162%
GTX580 = 119%
GTX480 = 100%
http://www.computerbase.de/2013-12/grafikkarten-20...
and
http://www.computerbase.de/2014-09/geforce-gtx-980...
Moving from the 5870 (score of 30.1) to the 980 (score of 105.1) is an increase of 3.49X. Therefore, GPU performance has increased ~3.5X from the time the HD5870 launched in Sept 2009 to a GTX980 today, or an average increase of ~36.7% per year since the 5870's launch.
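For anyone wanting to reproduce that kind of figure, here is a minimal sketch of the compound-growth arithmetic. It assumes only the ComputerBase composite scores quoted above (30.1 and 105.1); note that the per-year number depends on whether you treat the window as four or five years from the 5870's Sept 2009 launch.

```python
# Compound annual GPU performance growth from two benchmark scores (sketch).
# Scores are the ComputerBase composites quoted above: HD 5870 = 30.1, GTX 980 = 105.1.
def annual_growth(old: float, new: float, years: float) -> float:
    """Compound per-year improvement as a fraction (0.367 == 36.7% per year)."""
    return (new / old) ** (1.0 / years) - 1.0

total = 105.1 / 30.1
print(f"Total speedup: {total:.2f}x")                                  # ~3.49x
print(f"Per year over 4 years: {annual_growth(30.1, 105.1, 4):.1%}")   # ~36.7%
print(f"Per year over 5 years: {annual_growth(30.1, 105.1, 5):.1%}")   # ~28.4%
```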
nathanddrews - Tuesday, October 7, 2014 - link
60% of the time it works every time.
EzioAs - Tuesday, October 7, 2014 - link
I was just correcting Razyre.
His numbers indicate that the 7970 is 2x faster than the 5870 and the 290X is 30% faster than the 7970. You do the math, and you get that the 290X is 2.6x faster than the 5870.
RussianSensation - Tuesday, October 7, 2014 - link
Some good points. However, the only reason the 980M is much closer to the 980 is that the 980 is a mid-range Maxwell; GM200/210 will be high-end. The comparison NV uses is completely flawed because the 880M is a mid-range Kepler (GK104), so of course it will be 50-60% as fast as the flagship of that generation - the 780Ti. The same is true for 480 vs. 480M and 580 vs. 580M, because the 480 and 580 were flagship GPUs.
Let's see what happens when GM200/210 comes out and is compared to a GM204-based 1080M - the gap will grow again in favour of the desktop flagship GPU. NV is just playing marketing right now, comparing a high-end mobile Maxwell card against a mid-range Maxwell desktop card and calling the 980 a "flagship" desktop Maxwell chip -- which it isn't, even remotely.
azazel1024 - Tuesday, October 7, 2014 - link
It depends on how you define "not improved". My AMD 5570 just died a couple of days ago and I ordered a GTX 750. Looking at just raw numbers, it is between 150-250% faster than my 5570. I'll grant it is maybe closer to a 5670 in terms of where it sits on the performance tiers... but even against a 5670 it is at a minimum >100% faster in everything, and in most things still in the area of >150% faster. That is, in basically 4 years, for the same price and very slightly higher to somewhat lower power consumption, you get more than double the performance.
Sure, I wouldn't mind more, but I won't complain either.
tviceman - Tuesday, October 7, 2014 - link
The performance delta between the 980M and 970M is disappointing. Since notebook parts typically don't overclock as well, a smaller delta of ~20% would have been sufficient to keep the 980M well in front of the 970M. As it sits now, 30% is really, really large.
EzioAs - Tuesday, October 7, 2014 - link
This is quite normal for Nvidia mobile GPUs, right? Considering the price delta will also be just as high.
MrSpadge - Tuesday, October 7, 2014 - link
The GTX 970M is what the rumor mill says a full GM206 will be. So it's likely nVidia uses heavily crippled GM204 chips for the first batches and later switches to the more economical smaller chip.
RussianSensation - Tuesday, October 7, 2014 - link
More disappointing to me is the 980M's absence from thin-and-light form-factor laptops such as the MSI GS60 and GS70 models. All the other "laptops" in the list with the 980M are basically briefcases/bricks with a screen attached to them. It seems if you want a light and portable 15- or 17-inch laptop that doesn't weigh as much as a small printer, you have to get the 970M, or 970M SLI in the Aorus X7.
xype - Tuesday, October 7, 2014 - link
Hm, what resolution are those NVIDIA numbers from? I guess playing modern games smoothly on a retina MBP—if it ships with these GPUs—is still out of the question?
I haven’t checked recently, but do games offer a "no AA" mode for such high-resolution screens (and does it make a difference performance-wise)?
DanNeely - Tuesday, October 7, 2014 - link
nVidia is claiming beyond 1080p, and with the desktop 970 (slightly faster than the 980M) generally able to do 1440p with 4xAA while still being playable, you might be able to play at native resolution with AA disabled.
On the PC side, AA is almost always a user-configurable setting. I'd assume they'd keep the option on Mac ports, but I don't have one to check.
JarredWalton - Tuesday, October 7, 2014 - link
Sorry, I apparently missed including the "1080p" part before the NVIDIA figures. Yeah, I know -- they're claiming "beyond 1080p" but testing at 1080p. They also target 30+ FPS as "playable", so many of the games coming out now will still be able to run at >30 FPS and 3K+ resolutions, though we might need to disable anti-aliasing in some cases to get there.
xype - Tuesday, October 7, 2014 - link
Cool, thanks for the update!
xype - Tuesday, October 7, 2014 - link
It’s a setting, but I haven’t seen "no AA" for ages—which kinda sucks if you’re on a retina screen and don’t need it as much in the first place. But the 1440p info is appreciated, that’s the kind of numbers I was wondering about. Thanks :)
ClockworkPirate - Tuesday, October 7, 2014 - link
The current iMacs are configurable up to a 780M (not that it's worth the money... :P)
ekg84 - Tuesday, October 7, 2014 - link
I really don't think Apple would opt for current AMD GPUs, even considering their great OpenCL performance. The main reason is crappy efficiency compared to Maxwell, which I think Apple cares a great deal about.
On another note, I'm looking at those GTX 970M specs and it looks like this is what the upcoming desktop GTX 960 could look like, with ramped-up clocks of course.
chizow - Tuesday, October 7, 2014 - link
Interesting that Nvidia decided to go with a harvested die at near-full clockspeeds instead of a chip with all functional units intact but with reduced clockspeeds. I guess this does allow them a clear upgrade path for future product lines while retaining a good chunk of GM204's performance.
Also, curious what market research you are citing for the increase in gaming notebooks, Jarred. Not doubting your assertion, I'd just like to see it for my own interest/curiosity. I guess with Kepler Nvidia did make decent gaming on a laptop possible, but from my own experience with gaming laptops, they still tend to overheat and underperform, while costing significantly more than a higher-specced desktop. They also tend to lose a lot of the portability.
I guess if I traveled as much for business as I used to, I would be more interested in something like this, but then again I cringe at the thought of lugging one of these things around in addition to my work laptop and my clothes/pullman etc.
chizow - Tuesday, October 7, 2014 - link
To finish the thought, I would be a lot more open to something like a Shield Tablet or Portable, using the GameStream feature to remotely play games on my PC, but of course that is hit or miss depending on the quality of the connection where I am travelling.
JarredWalton - Tuesday, October 7, 2014 - link
Heh, the market research is from NVIDIA, who is in turn citing... I'm not entirely sure. And keep in mind that when NVIDIA moved the 850 series from the GT to the GTX line, they inherently gave "gaming notebooks" a much larger piece of the pie. I also have to say that Optimus has helped a lot with the sales of gaming notebooks, as there's less need to compromise these days. Then toss in some nice designs like those from Razer and gaming notebooks can even be sleek and sexy as opposed to big and boxy.
I have no doubt NVIDIA is selling more GPUs for gaming notebooks today than three years ago, and perhaps it's even 5X as many, but certainly a big part of the increase comes from the plateau in gaming requirements. If you're willing to turn off AA (especially SSAA) and run at High detail instead of Ultra, a very large percentage of games run quite well on anything above a GTX 670M (or equivalent). Will we continue to see that sort of sales growth for the next three years? Probably not.
chizow - Tuesday, October 7, 2014 - link
Ah ok, that makes sense. Nvidia does tend to throw out some interesting figures, and given they are on the supply end of things, their counts can be pretty accurate (e.g., Jensen Huang revealed during his Game24 keynote that the GTX 680 sold ~10M units). They also have the deep pockets to pay for research from firms like JPR, Mercury Research, Gartner, etc.
But I guess you are right as well regarding the number of entrants into this market. About a decade ago it was just Alienware and FalconNW, then it was CyberPower, iBuyPower, etc., then the big Taiwanese OEMs got into the game, and now you have gaming-centric companies like Razer breaking into the market. They are all obviously going after a pie that is getting bigger, or they wouldn't bother.
Looking forward to the test results, but again, for my own current usage patterns I can't see myself buying a gaming notebook. When I travel now it's usually local, short-term or all business, and I carry my work Dell Ultrabook or Surface Pro 3 - no time for gaming. Competent laptop gaming would be interesting, though, for a student or a business traveler who is on the road most of the week or 50%+ of the time.
jtd871 - Tuesday, October 7, 2014 - link
Where these could really shine is in SFF/Brix/NUC-like units, especially if the thermals and prices are reasonable. Steamboxen, anyone?
chizow - Tuesday, October 7, 2014 - link
Yeah absolutely, but for Steamboxes you can already go with a full-sized GTX 970/980 (if you can find one!). But yes, these would go great in a BRIX unit, although I personally think the BRIX units with the GTX 760 were asking a little bit too much (like $700 I think?).
nathanddrews - Tuesday, October 7, 2014 - link
Where are the integrated G-Sync laptops? Seems like a PRIME market for such a technology. We're already talking high-end prices, so what's another $100-200 for something that mobile GPUs most definitely need?
BigT383 - Tuesday, October 7, 2014 - link
+1. I was wondering this as well. As a laptop manufacturer, when you're selling the display and the GPU as a single unit, it seems like it would be even easier to support the G-Sync technology.
Kevin G - Tuesday, October 7, 2014 - link
This would be an interesting development and something a premium manufacturer could use to justify their premium price and/or differentiate themselves from the crowd.
I suspect there isn't room for the additional circuitry used in G-Sync displays. There is also the power draw, which isn't great and has to be accounted for in a notebook vs. a standalone monitor.
Basically until nVidia fully integrates their G-Sync technology into a chip, I'd be surprised to see it in a laptop.
chizow - Tuesday, October 7, 2014 - link
Yeah, the module and heatsink probably could not be easily integrated into a laptop chassis, and the premium on G-Sync is still pretty high at $150-200 over a comparable non-G-Sync module. Add to that the possibility that some users might choose to dock the laptop to a TV or bigger display, and you'd lose G-Sync functionality.
nathanddrews - Tuesday, October 7, 2014 - link
Not if you connect to a G-Sync display using DP (assuming the notebooks have DP out).
chizow - Tuesday, October 7, 2014 - link
Yeah, but then you'd be paying the premium 2x. In the external display scenario, I think I'd just go with a single stationary G-Sync panel, but then again, I don't really envision myself gaming away from my SOHO. Someone buying a gaming monitor might value it more on the laptop itself, I guess.
nathanddrews - Wednesday, October 8, 2014 - link
Not really 2X. A G-Sync monitor is a small percentage of the total system cost of a 970M/980M laptop.
johnthacker - Tuesday, October 7, 2014 - link
I am not a big fan of the "Closing the Gap" graph on page 2. The notebook and desktop icons used are very large, and I believe they end up tricking the eye. At a casual glance, it is very unclear that the relative performance of the notebook is still only 75% of the desktop; it looks a lot closer. The human eye is drawn to the negative space between the two icons. While in reality only half of the performance gap has closed since the 480M, the graph gives the false impression that far more of the gap has closed, since the empty space between the icons has shrunk by well over 90%.
nathanddrews - Tuesday, October 7, 2014 - link
Not to mention that the performance gap is closing because Nvidia has prioritized efficiency over performance. The desktop parts are neutered in order to improve mobile parts.
JarredWalton - Tuesday, October 7, 2014 - link
All too true. If they had released a 250W GM200, I can guarantee the gap would be back to around 50%. Hahaha... But we need something for the early 2015 update when Broadwell quad-core launches, right?
nathanddrews - Tuesday, October 7, 2014 - link
One can only hope!
Also, I didn't mean to slam Nvidia with my other comment. I appreciate their dedication to efficiency, but I also don't mind heating my home with my GPUs if it means face-melting performance.
darth415 - Tuesday, October 7, 2014 - link
I don't know, a 3200-core graphics card (which is what GM200 is expected to be) makes my mouth water, despite what I know its power consumption numbers will be.
adityarjun - Tuesday, October 7, 2014 - link
How much faster would the 980M be compared to the 580M, I wonder?
JarredWalton - Tuesday, October 7, 2014 - link
Well, GTX 980M is basically twice as fast as GTX 680M, and 680M was roughly 40-50% faster than 580M. That means GTX 980M is around 140-150% faster than GTX 580M. :-)
http://www.anandtech.com/show/5914/nvidia-geforce-...
adityarjun - Wednesday, October 8, 2014 - link
Weirdly enough, I have the M17x R3 with the 580M and I am not at all tempted! The R3 is good enough for me.
My only reason to upgrade now would be a better screen, better battery life, better speakers, etc., rather than raw power. I wish some companies would take note.
ignigena - Monday, October 13, 2014 - link
Check your maths. 280-300% based on the numbers you have given.
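For reference, here is the arithmetic both of the preceding posts are juggling, worked out as a quick sketch using only the ratios quoted above (980M ≈ 2x the 680M, 680M ≈ 1.4-1.5x the 580M); the apparent disagreement mostly comes down to "percent faster" versus "percent of".

```python
# 980M vs. 580M, using only the ratios quoted above (illustrative, not measured data).
for factor_680m in (1.4, 1.5):          # 680M as a multiple of the 580M
    total = 2.0 * factor_680m           # 980M as a multiple of the 580M
    print(f"980M is about {total:.1f}x the 580M "
          f"({total * 100:.0f}% of its performance, "
          f"i.e. {(total - 1) * 100:.0f}% faster)")
# -> 2.8x-3.0x: 280-300% of the 580M's performance, or 180-200% faster.
```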
zqw - Tuesday, October 7, 2014 - link
Please include in notebook reviews whether Optimus can be bypassed and the Intel GPU hidden. Armies of Rift DK2 devs will thank you.
It would also be interesting to test throttling on battery, and watt/hr claims. I'm assuming even gaming laptop batteries can't deliver 150-200W?
JarredWalton - Tuesday, October 7, 2014 - link
Correct; the batteries in laptops can typically only deliver around 100W, so part of BatteryBoost is working with the manufacturers to improve the notebooks to better use that power.
blah238 - Tuesday, October 7, 2014 - link
HDMI 2.0?
elbert - Wednesday, October 8, 2014 - link
Higher-clocked core and memory and you have the GTX 960 and GTX 960 Ti. I think the GTX 960 Ti is due out in early December.
Ethos Evoss - Sunday, November 2, 2014 - link
Well AnandTech, I own an Acer Aspire V7 with a GT 750M and I was able to play all new games perfectly. The Evil Within I played at 720p, even with some antialiasing and medium shadows.
And Alien: Isolation at 1080p fluidly, all on ultra settings!
Now playing Legend of Grimrock 2 and it runs OK at 900p.
So I am really asking: aren't these top high-end GPUs just marketing?
TheinsanegamerN - Thursday, January 1, 2015 - link
No, as you just said, you had to turn down the resolution, detail settings, and shadows to get good performance. Some of us want more than "OK" performance at a resolution that was outdated for PC gaming in 2005.
KwasiJr55 - Friday, December 5, 2014 - link
Does anyone know the prices for the 980M/970M, and where I could buy them? I have a Dell Inspiron 17 7000 with a GT 750, but I do architecture, so Revit is starting to slow down like crazy on my big projects. And I also want to play Crysis 3 with no problems lol :)
Thanks.