AMD Radeon RX 9060 XT 16GB review: plenty of performance with 16GB

Feels like if one is going to step up to a 16 GB model, the 5060 Ti looks like a better choice for $40 more. Otherwise one is just looking to save $90 by sticking with the 8 GB model.
 
AMD showing again why they don't care about gaining market share: they have a product that can compete with Nvidia, but they don't price it anywhere near what it would take to get people to buy it if they're already Nvidia users.
That's because it cannot compete. It's a bit worse, for a bit more power, if you're a gamer. Non-gaming workloads run abysmally compared to Nvidia, due to AMD's neglect of HIP, ROCm, and any effort to make pro workloads run well.
This is not a competitive product; that's why it should be priced way lower.
 
but don't forget the 5060 Ti has lower performance on older MBs because of narrow lanes, and all those hoping to upgrade their older system with the 5060 series should also buy a new MB+CPU+MEM to gain the full advantage of the 5060 Ti
 
but don't forget the 5060 Ti has lower performance on older MBs because of narrow lanes, and all those hoping to upgrade their older system with the 5060 series should also buy a new MB+CPU+MEM to gain the full advantage of the 5060 Ti

Ehh, that really depends. PCIe bandwidth, which is what you are talking about, is only involved when data gets transmitted from system RAM to GPU VRAM. When you have plenty of VRAM you really don't need to worry about that; it only matters if you are in a VRAM-constrained situation which requires graphics resources to be swapped in and out of system RAM across the PCIe bus. A PCIe 4.0 x16 slot is 32GB/s one way, PCIe 5.0 x16 is 64GB/s one way. System memory is much faster and therefore not the bottleneck. Honestly, if someone is in such a situation that they are swapping texture data across the PCIe bus, they are already having a bad experience and need to either turn down texture resolution or upgrade to a newer card.
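
To put rough numbers on that (a quick illustrative Python sketch; the 2GB swap size is a made-up figure, and note the 5060 Ti only has an x8 electrical link, which is why older PCIe 3.0 boards hurt it):

```
# Back-of-envelope: time to stream texture data across the PCIe bus.
# One-way peak bandwidths in GB/s; real transfers achieve somewhat less.
PCIE_GBPS = {
    "PCIe 3.0 x8": 8,    # 5060 Ti on an older board (x8 electrical link)
    "PCIe 4.0 x8": 16,
    "PCIe 4.0 x16": 32,
    "PCIe 5.0 x8": 32,
    "PCIe 5.0 x16": 64,
}

swap_gb = 2.0  # hypothetical: 2GB of textures that must be streamed in

for link, gbps in PCIE_GBPS.items():
    ms = swap_gb / gbps * 1000
    print(f"{link:>13}: {ms:6.1f} ms to move {swap_gb} GB "
          f"(~{ms / (1000 / 60):.0f} frames at 60 FPS)")
```

The takeaway matches the post above: if you're not swapping, the link width barely matters; if you are, even the fastest link costs you whole frames.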
 
I had no problem getting the XFX model of the 16GB for MSRP ($350) via Newegg. Lots of $350 models out there. If you live near a Micro Center, there's an even better selection.

One major thing to note about the PowerColor Reaper: it's only 200mm x 39mm for a 16GB version. That makes it one of the best cards for some smaller SFF builds out there for the money.
 
Feels like if one is going to step up to a 16 GB model, the 5060 Ti looks like a better choice for $40 more. Otherwise one is just looking to save $90 by sticking with the 8 GB model.
It all depends on retail pricing. Right now, there's one $379 9060 XT 16GB at Newegg available. There are several $349 models that are out of stock, but there's no indication whether they'll ever be in stock again. Hopefully! But recent history makes that a question mark.

RTX 5060 Ti 16GB is supposed to start at $429, so $50 extra, and right now there are a couple cards showing up at Best Buy (which means local availability is going to be hit and miss in my experience — dealing with Best Buy has been a joke since the 30-series days). But I guess they're shipping now without making it hard to get.

So yeah, as I noted in the 5060 Ti 16GB review, it's really about availability and pricing. I gave the 5060 Ti 16GB and 9060 XT 16GB the same score, loosely based on hope for MSRP pricing to become reality. It took a bit on the 5060 Ti, but there are at least GPUs at MSRP available for purchase.

Overall, it's ~7% higher performance from the 5060 Ti for 13% more money at current minimum prices. If AMD's GPU drops to the $349 MSRP, then it's 7% more performance from Nvidia for 23% more money. I still say Nvidia's extra features (DLSS mostly, better RT and some AI stuff as well) are worth about a 10~20 percent price premium, and that's basically where we're at.
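
For anyone checking the math, the percentages above work out like this (prices as quoted in this thread):

```
# Price/performance arithmetic from the figures quoted above.
perf_delta = 1.07  # ~7% higher performance for the 5060 Ti 16GB

nv_price = 429                # 5060 Ti 16GB MSRP
for amd_price in (379, 349):  # current minimum vs. MSRP for the 9060 XT 16GB
    price_delta = nv_price / amd_price - 1
    print(f"9060 XT at ${amd_price}: 5060 Ti costs {price_delta:.0%} more "
          f"for ~{perf_delta - 1:.0%} more performance")
```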

The user experience with the 8GB cards is quite poor in a lot of games. You can fix it by turning down settings, usually, but when it's just the cost of a single game to ensure you're not flirting with running out of VRAM, I just don't see a good reason to purchase built-in obsolescence. The 4060 Ti 8GB was a poor move two years ago. Today, the 5060 Ti 8GB and 9060 XT 8GB models are a joke, and I'd say the same for the RTX 5060.

Nvidia really should have just pushed all the GDDR7 manufacturers to start with 24Gb (3GB) chips and forget all about 2GB chips. That would have been so, so, SO much better than what we now have. Imagine if the RTX 50-series lineup were:

RTX 5060 12GB @ $329
RTX 5060 Ti 12GB @ $399
RTX 5070 18GB @ $549
RTX 5070 Ti 24GB @ $799
RTX 5080 24GB @ $1,049
RTX 5090 48GB @ $1,999

That would have been everything I wanted to see. Instead, we got slightly lower prices at each GPU tier, with clearly inferior offerings due to a lack of VRAM. The only cards where you can maybe make an argument for using 2GB GDDR7 chips are the 5070 Ti (16GB is 'enough' mostly), and the 5090 (32GB is definitely sufficient for non-professional workloads). Maybe the RTX 5060 could justify 8GB if its price was lower ($249), but even so I think 12GB is the bare minimum for a new GPU in 2025.
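
The capacities in that hypothetical lineup fall straight out of the bus widths: each GDDR7 chip occupies a 32-bit slice of the bus, so in a standard (non-clamshell) configuration, capacity is just chips times density. A quick sketch:

```
# VRAM capacity from bus width and chip density (standard, non-clamshell).
# Each GDDR7 chip sits on a 32-bit slice of the memory bus.
def vram_gb(bus_width_bits: int, chip_gb: int) -> int:
    chips = bus_width_bits // 32
    return chips * chip_gb

# Blackwell bus widths; capacities with 2GB vs. 3GB chips.
for name, bus in [("5060 / 5060 Ti", 128), ("5070", 192),
                  ("5070 Ti / 5080", 256), ("5090", 512)]:
    print(f"{name:>15} ({bus}-bit): {vram_gb(bus, 2):2d}GB with 2GB chips, "
          f"{vram_gb(bus, 3):2d}GB with 3GB chips")
```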

All the RTX Pro Blackwell GPUs will likely use 3GB GDDR7 chips, so it was a market segmentation decision that's just screwing over gamers. Again. If the RAM companies had simply skipped 2GB modules at launch and focused on 3GB, for 50% more money per chip, it would have been a far better solution.
I was hoping to see 9060 XT 16GB vs 8GB charts, the same as the 5060 Ti got, as a way to see where the cutoff is instead of the misinformation that gets spread. It also entirely depends on what the market cost is gonna be.
AMD didn't send an 8GB sample for review, and the 5060 Ti 8GB card was actually one I myself purchased (after I left Tom's Hardware for greener pastures). Neither company really wants people posting benchmarks that show how 8GB struggles at higher settings for $50 less money.
 
AMD didn't send an 8GB sample for review, and the 5060 Ti 8GB card was actually one I myself purchased (after I left Tom's Hardware for greener pastures). Neither company really wants people posting benchmarks that show how 8GB struggles at higher settings for $50 less money.

Ahh, didn't know that you guys didn't have one on hand. Looking at the 5060 Ti 8GB vs 16GB, I don't see any "struggling".

1440p ultra shows both models entering uncomfortable territory, with the 16GB barely at 60 FPS and the 8GB a few FPS under. 1440p ultra really should be for a higher-tier SKU than these entry-level cards.

[Chart: 1440p ultra rasterization geomean]


At 2160p ultra, none of those cards are playable; going from a 5060 Ti / 9060 XT 8GB to 16GB won't make the experience not suck. Either upgrade to an enthusiast SKU or turn down the resolution / settings.

[Chart: 2160p ultra rasterization geomean]



I separate technical viability from consumer value, as those are entirely separate questions: the first is objective and the second is subjective. Objectively, the 8GB cards are fine for 1080p high ~ ultra and 1440p high or lower. Beyond that you absolutely need 12GB and above, and of course you also need more compute / memory I/O than is usually found on these entry-level 128-bit GPUs. Value-wise, while I personally don't think the 8GB cards are worth buying, assuming everything is at MSRP, I can't make that decision for all consumers.
 
Ahh, didn't know that you guys didn't have one on hand. Looking at the 5060 Ti 8GB vs 16GB, I don't see any "struggling".

1440p ultra shows both models entering uncomfortable territory, with the 16GB barely at 60 FPS and the 8GB a few FPS under. 1440p ultra really should be for a higher-tier SKU than these entry-level cards.

[Chart: 1440p ultra rasterization geomean]


At 2160p ultra, none of those cards are playable; going from a 5060 Ti / 9060 XT 8GB to 16GB won't make the experience not suck. Either upgrade to an enthusiast SKU or turn down the resolution / settings.

[Chart: 2160p ultra rasterization geomean]



I separate technical viability from consumer value, as those are entirely separate questions: the first is objective and the second is subjective. Objectively, the 8GB cards are fine for 1080p high ~ ultra and 1440p high or lower. Beyond that you absolutely need 12GB and above, and of course you also need more compute / memory I/O than is usually found on these entry-level 128-bit GPUs. Value-wise, while I personally don't think the 8GB cards are worth buying, assuming everything is at MSRP, I can't make that decision for all consumers.
The geomean does obfuscate things. 18 games showing a 12.4% difference overall may not seem like much, but that's also more of the "best-case" scenario for the 8GB card. Tests that I can just run one after the other on a 16GB card can produce anomalies on the 8GB card. What you end up with is about 70% of the games doing fine at 1440p ultra on the 8GB card, while the remaining 30% can show significant issues.

At 1440p ultra: Cyberpunk shows a 55% advantage for 16GB, F1 24 is 15%, Assassin's Creed is 15%, Dragon Age is 24%, Flight Simulator 2024 is 13%, Horizon Forbidden West at 38%, Spider-Man shows 9%, and Stalker 2 is 33%. And for sure several of those seven games can perform MUCH worse than what the charts show. Try playing for an hour on the 8GB card at 1440p ultra and you may encounter massive stutters on a regular basis, and at times performance can drop into the single digits (from 40+ FPS).
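
To show how the geomean hides those outliers, here's a toy recreation using the per-game deltas above, with the (hypothetical) assumption that the other ten of the 18 games run at effective parity:

```
from math import prod

# 16GB-vs-8GB advantages at 1440p ultra, from the eight games listed above.
outliers = [1.55, 1.15, 1.15, 1.24, 1.13, 1.38, 1.09, 1.33]
# Hypothetical: treat the remaining ten games as dead even.
ratios = outliers + [1.0] * 10

geomean = prod(ratios) ** (1 / len(ratios))
print(f"Overall geomean advantage: {geomean - 1:.1%}")  # ~10%, near the 12.4% quoted
print(f"Worst single game: {max(ratios) - 1:.0%}")      # 55%
```

A ~10% headline number sitting on top of a 55% worst case is exactly the obfuscation being described.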

Look at the 1% lows as well. The 5060 Ti 16GB shows a 24% lead in minimum FPS compared to the 8GB card. That indicates there's a lot more hitching and stuttering going on (because there is). And you're also showing the rasterization geomean charts, which are less demanding games where the VRAM deficiency isn't as pronounced. Granted, I only have four RT games (had to drop Control due to the last patch), but there are still four games (out of 14) that showed a 10% or larger difference in average FPS, and six games showed a 20% or greater difference in 1% lows — with Stalker 2 showing a 146% advantage on minimum FPS.

And that's not just at 1440p and 4K ultra. You can get similar behavior at 1080p ultra at times, and that means you'll almost certainly encounter some games where that sort of thing also happens at 1080p high or 1440p high. Basically, you're spending $50 extra (depending on current prices) to wash away a whole bunch of idiosyncrasies and oddities. I can't stress that enough.

The charts tell one story, but I'm being absolutely frank in saying that it's not the full story. I tried to explain that in the review text. Every time you encounter a major showstopper (like how about Indiana Jones just crashing to desktop at ultra settings with a VRAM allocation error), that's a huge deal. The 5060 Ti doesn't have a ton of compute, so that 16GB feels a bit overkill and I still wish we had the middle ground 12GB option using 3GB chips... but it has enough compute that 8GB becomes a massive concern. And the same undoubtedly applies to the 9060 XT 8GB. In fact, in my experience (e.g. RX 7600 XT vs RX 7600), AMD has even more issues than Nvidia when running 8GB.
 
Hi Jarred, I'll take this opportunity to say goodbye. So, uh, GOODBYE!

Heh. No, really, it's good in that you got out in time. Consumer GPU isn't exactly a growth market, and you have enough runway to have a full second career. So, congrats on the new career path.

Hope to see you around in some capacity in the tech sphere. Consumer AI accelerator, maybe? Not goodbye, then, but so long.

NB: I'm a bit miffed that for your final work here, THW can't see fit to give you proper credit, and your article is shunted into a corner of the page in favor of some filler-of-the-day piece. Oh, well, that's corporate appreciation for you, eh? No pocket watch, just a kick in the derriere and a bus ticket out of town.
Cheers. I actually asked to have my name omitted from the author field. It's better for me that way, LOL. There's no conflict of interest, but I just wanted it to be less about me due to the new job.
 
One major thing to note about the PowerColor Reaper: it's only 200mm x 39mm for a 16GB version. That makes it one of the best cards for some smaller SFF builds out there for the money.

That's right, and it looks like they did a fantastic job keeping the temps & noise in check. For such a small card, this is impressive; most of the time the smaller cards are quite loud.
 
The geomean does obfuscate things. 18 games showing a 12.4% difference overall may not seem like much, but that's also more of the "best-case" scenario for the 8GB card. Tests that I can just run one after the other on a 16GB card can produce anomalies on the 8GB card. What you end up with is about 70% of the games doing fine at 1440p ultra on the 8GB card, while the remaining 30% can show significant issues.

Kinda long but I want to be accurate

My statement
Objectively the 8GB cards are fine for 1080p high ~ ultra and 1440p high or lower

1440p ultra, specifically having texture quality (resolution) set to 4K/8K (in some games), is going to kill any 8GB card. You can set ultra and then just go down the list and lower texture detail one or two levels to high. All data produced so far supports this conclusion.

Also, do not confuse game engine bugs with VRAM. Assuming you have 32GB of system RAM, an 8GB card will have 24GB of display memory, and I do not think any of those games require more than 24GB of display memory.

Games do not manage graphics VRAM; they don't even have access to it, they can only load into display memory. The API and WDDM 2 are what manage which graphics resource gets loaded into VRAM or system RAM. When your game loads a resource, generally textures, it's first loaded into system RAM. When the framework detects you are about to reference that resource, it will transfer it into graphics VRAM across the PCIe bus, then change the segment mapping to reference its new location. As you keep playing, the API will keep loading referenced resources into graphics VRAM until it starts to run out. Once it starts getting low, it'll move the oldest resources back to system RAM.

Since your VRAM is now a giant graphics resource cache, cache misses require the graphics pipeline to either be paused while the framework transfers the resource over, or some engines allow developers to use custom logic and just keep rendering without it. This shouldn't be done, and thankfully I think only a few games do it. Once we understand what's happening under the hood, we can easily see VRAM-starved situations as micro stutters, and it's a sliding scale that becomes very noticeable.
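
A toy model of that residency behavior (purely illustrative Python, not the actual WDDM logic): treat VRAM as an LRU cache of resources, where every miss stalls rendering while a resource crosses the PCIe bus.

```
from collections import OrderedDict

# Toy model of WDDM-style residency: VRAM as an LRU cache of graphics
# resources. A miss means waiting on a PCIe transfer (a stutter).
# All sizes are illustrative, not real measurements.
class VramCache:
    def __init__(self, capacity_mb: int):
        self.capacity = capacity_mb
        self.used = 0
        self.resident = OrderedDict()  # name -> size_mb, oldest first
        self.misses = 0

    def reference(self, name: str, size_mb: int):
        if name in self.resident:
            self.resident.move_to_end(name)  # hit: mark as recently used
            return
        self.misses += 1                     # miss: PCIe transfer needed
        while self.used + size_mb > self.capacity:
            _, evicted_mb = self.resident.popitem(last=False)  # demote oldest
            self.used -= evicted_mb
        self.resident[name] = size_mb
        self.used += size_mb

# A scene that cycles through more texture data than fits in the cache:
cache = VramCache(capacity_mb=4096)  # roughly what's left on an 8GB card
for i in range(100):
    cache.reference(f"texture_{i % 10}", size_mb=512)  # 10 x 512MB > 4096MB
print(f"misses (stutters): {cache.misses} / 100 references")  # thrashes: 100
```

Once the working set exceeds the cache even slightly, an LRU pattern like this thrashes on every reference, which is the "sliding scale" of micro stutter described above.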

Now another note on VRAM: it contains more than just resources like textures, it also contains all the various buffers and work spaces. This is why programs can't manage VRAM directly, as buffers and workspaces are defined by the drivers and framework. Windows WDDM itself uses about 200~300MB of VRAM, and all the other buffers / shader memory take up another 1~2GB, give or take, leaving 5~6GB for things like textures. DLSS itself eats up 1.5GB, which would reduce that value to 3~4GB, and then RT can eat up another 1~2+ GB depending on how the game implements it. That is why, if you attempt to do DLSS upscaling + MFG + RT + Ultra Legendary settings, you can easily squeeze the "cache" space down to virtually nothing, causing the game to have to basically run out of system RAM. Anyone wanting to push those features absolutely needs a 12GB card minimum; I would recommend anything with a real 192-bit memory interface.
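
Tallying those estimates for an 8GB card (every figure below is the rough number from the paragraph above, taken at the high end):

```
# Rough VRAM budget for an 8GB card, using the estimates above (MB).
total_mb = 8192
budget = {
    "WDDM / desktop": 300,
    "buffers + shader memory": 2048,   # high end of the 1~2GB estimate
    "DLSS": 1536,
    "ray tracing structures": 2048,    # high end of the 1~2+ GB estimate
}
remaining = total_mb - sum(budget.values())
for item, mb in budget.items():
    print(f"{item:>26}: {mb:5d} MB")
print(f"{'left for texture cache':>26}: {remaining:5d} MB")  # ~2.2GB
```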
The charts tell one story, but I'm being absolutely frank in saying that it's not the full story. I tried to explain that in the review text. Every time you encounter a major showstopper (like how about Indiana Jones just crashing to desktop at ultra settings with a VRAM allocation error), that's a huge deal. The 5060 Ti doesn't have a ton of compute, so that 16GB feels a bit overkill and I still wish we had the middle ground 12GB option using 3GB chips... but it has enough compute that 8GB becomes a massive concern. And the same undoubtedly applies to the 9060 XT 8GB. In fact, in my experience (e.g. RX 7600 XT vs RX 7600), AMD has even more issues than Nvidia when running 8GB.

Data doesn't lie; that data tells us where the cutoff line is. What the data says is that for 1440p high or lower, these entry-level 8GB cards are fine. Understanding what the data is saying requires knowledge of the underlying technologies, and memory segmentation and driver models are kinda esoteric to non-engineers. There is a saying: you see what you want to see. People who enter with a preconceived idea will subconsciously look for things to support that idea while simultaneously rejecting anything that might challenge it. Case in point is blaming engine bugs and game startup issues on "VRAM", which makes no sense because they are entirely separate things. When you close a game, all its associated assets are flushed from memory and the only thing remaining is whatever is inside the Windows file cache. It's like stubbing your toe and blaming the neighbor's dog because it was barking.

I really have to stress that anything with a 128-bit memory interface is going to be entry level. We're in a weird time because compute is starting to really outpace memory technology. My personal feeling is that the 9060 should be the typical 128-bit / 8GB (four chip) card and the 9060 XT a 192-bit / 12GB (six chip) card to complement the 9070's 256-bit / 16GB (eight chip) card. AMD chose instead to only produce two dies, the exact opposite of Nvidia. GDDR6 caps out at 16Gb per chip; GDDR7 is just starting to happen and is at 16Gb per chip, with Samsung making some 24Gb GDDR7 modules that are just now starting to appear. You can't just stick GDDR7 chips into a bus built for GDDR6, for the same reason you can't plug DDR5 modules into a DDR4 CPU, so AMD would have to create a new die and product line, possibly a 9065 / 9075 refresh next year.
 
This is more an addendum about clamshell mode and why it's rather distasteful as a solution.

A single GDDR chip uses two 16-bit memory interfaces to accept two 16-bit requests per cycle. Unlike system RAM, where all the chips are daisy-chained such that only one chip per 32-bit channel can be active at once, graphics RAM is run in parallel, with each graphics channel able to handle two memory commands. Your total memory requests per cycle is then the number of chips * 2, and this is important because graphics processing is highly parallel; those thousands of "graphics cores" are all making read/write requests nonstop. Something like the 5090 with its 512-bit bus uses sixteen chips for 32 simultaneous memory commands per cycle. Something like the 9060 with its 128-bit bus would use four chips for 8 simultaneous memory commands.

What clamshell mode does is physically remove the wiring for one command channel from each chip so that two chips can share the same memory channel. Instead of four chips for eight simultaneous commands, we get eight chips for eight simultaneous commands. You are only getting half the graphics memory performance that you paid for. As a solution it was never meant for the budget / entry-level space; it was something to be used for high-end workstation or datacenter GPUs that need vastly more memory capacity.
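
Putting that chip-count math together (a small sketch of the model described above, where each chip normally serves two 16-bit command channels):

```
# Chips and simultaneous memory commands per the model above.
def memory_config(bus_bits: int, clamshell: bool):
    commands = bus_bits // 16                    # 16-bit command channels
    chips = (bus_bits // 32) * (2 if clamshell else 1)
    return chips, commands

for bus, clam, label in [(128, False, "128-bit, 8GB (normal)"),
                         (128, True,  "128-bit, 16GB (clamshell)"),
                         (512, False, "512-bit (5090)")]:
    chips, cmds = memory_config(bus, clam)
    note = "  <- half the command bandwidth per chip" if clam else ""
    print(f"{label:>27}: {chips:2d} chips, {cmds:2d} simultaneous commands{note}")
```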
 
Still waiting for mine to come in. UPS had some kind of shipping delay, but it's expected Tuesday.

Now I just need to find and buy the ultrawide monitor to go with it. Been wanting one, and I finally got a GPU that can handle it. :)

Was thinking 1080p ultrawide, but it seems like there are not very many of them. The 1440p ones seem to be everywhere, though.