Unfortunately, figuring out what graphics card to buy can be intimidating. Not only do you have to choose between AMD and Nvidia, but you also have to make sense of all the specs and numbers floating around. It’s tough, especially if you’re new to the hobby. But worry not, as our guide will walk you through the process step-by-step. Let’s get started.
“GPU” and “graphics card” are often used interchangeably, but that’s technically incorrect. “GPU,” which stands for “graphics processing unit,” refers to the chip itself, manufactured by AMD, Intel, or Nvidia. For example, the GeForce RTX 3060 Ti is an Nvidia GPU, while the Radeon RX 6700 XT is an AMD product.
“Graphics card,” on the other hand, refers to the total package that you purchase from an add-in-board (AIB) partner such as Gigabyte or Asus. This includes the GPU, cooling, video memory, and all the supporting circuitry that lets your system communicate with the GPU. Thus, the Asus TUF Gaming Nvidia GeForce RTX 3080 is a graphics card, not a GPU.
Step 1: AMD or Nvidia (or Intel)?
The past few years have seen AMD and Nvidia diverge significantly in the feature sets of their GPUs. No longer is it simply a matter of buying the fastest GPU you can afford. Instead, you have extra factors to consider, such as whether you want to enable ray tracing or which upscaling technology you want access to.
So, before you start digging into benchmarks, it’s a good idea to sit down and evaluate what features you want in a graphics card. This will help you decide which manufacturer to prioritize.
Ray Tracing
Ray tracing (often shortened to RT) is a complex beast, and this isn’t really the place to go in-depth into what it is or how it works. But to put it simply, it’s a technology that allows much more realistic reflections, shadows, and lighting in video games at the cost of increased demand on your CPU and GPU.
The results can sometimes be stunning, like in Metro Exodus: Enhanced Edition. The ray-traced lighting and global illumination are game-changers and make the game look like a product from the future.
Nvidia led the push for real-time ray tracing in video games with its RTX 20-series cards, so it’s no surprise that the company’s cards hold a noticeable lead in ray-traced performance compared to AMD and Intel. Nvidia GeForce cards have much more advanced dedicated ray-tracing hardware than AMD cards and thus perform noticeably better in heavy RT workloads.
For example, the Nvidia GeForce RTX 4080 significantly outperforms AMD’s RX 7900 XTX in most ray-traced workloads. Intel GPUs also have dedicated RT hardware, but the company has yet to release a true high-end card, so it remains to be seen how well its GPUs handle ultra-quality RT workloads.
Admittedly, only some modern games support ray tracing, and even fewer truly impress with their ray-traced visuals. Metro Exodus: Enhanced Edition is an exception, as are Control and Cyberpunk 2077. However, ray tracing isn’t always the game-changer that Nvidia might hope it is.
Despite that, the industry is moving towards ray tracing as the default rendering paradigm. Unreal Engine 5’s Nanite and Lumen technologies are designed to take advantage of hardware ray tracing (with a software fallback layer). At the same time, Cyberpunk 2077’s new “RT Overdrive” mode is a tantalizing glimpse at a fully ray- and path-traced future.
However, by the time ray tracing becomes the norm, you’ll likely have access to much more capable ray-tracing cards from all three manufacturers. In the here and now, ray-tracing prowess is welcome but far from necessary. I like it, but you may not feel the need for it.
If you like what you see from the example videos we’ve linked, you’ll want an Nvidia card for maximum performance in ray-traced scenarios. Intel GPUs are OK at ray tracing for the price, but their budget-minded design means they can’t deliver the pure ray-traced performance you can get from Nvidia GPUs.
On the other hand, if ray tracing isn’t something you’re interested in, then you’ll be fine with an AMD card.
High-Quality Upscaling: DLSS, FSR, and XeSS
Another exciting development in video game graphics technology is the rise of high-quality “smart” upscalers. The two most popular solutions are Nvidia’s Deep Learning Super Sampling (DLSS) and AMD’s FidelityFX Super Resolution (FSR). Intel’s Xe Super Sampling (XeSS) is also an option, but support is limited and will likely remain so until the company’s next generation of graphics cards.
These upscalers run the game at a lower internal resolution before upscaling the output to your monitor’s current resolution. This allows for higher framerates or better graphics settings, all without sacrificing too much in the way of visual quality and clarity. In a game like Death Stranding, DLSS can look as good as native resolution, if not slightly better.
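To make the internal-resolution trade-off concrete, here’s a small sketch that computes what the GPU actually renders for each preset. The per-axis scale factors below match Nvidia’s published DLSS 2 quality modes (FSR 2 uses very similar ratios); they’re included here as reference values, not something this guide stated.

```python
# Per-axis render scale for each upscaler quality mode (DLSS 2 presets;
# FSR 2's ratios are nearly identical).
SCALE_FACTORS = {
    "Quality": 1 / 1.5,            # ~67% per axis
    "Balanced": 1 / 1.72,          # ~58% per axis
    "Performance": 1 / 2.0,        # 50% per axis
    "Ultra Performance": 1 / 3.0,  # ~33% per axis
}

def internal_resolution(width, height, mode):
    """Return the resolution the GPU actually renders before upscaling."""
    s = SCALE_FACTORS[mode]
    return round(width * s), round(height * s)

# A 4K output in Quality mode renders at roughly 1440p internally:
print(internal_resolution(3840, 2160, "Quality"))      # (2560, 1440)
print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

This is why Performance mode is so much faster than Quality mode: at 4K output, it renders only a quarter of the native pixel count rather than about 44%.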
While all three technologies aim for the same result, how they get there differs significantly. Nvidia and Intel use dedicated hardware on their graphics cards along with machine learning algorithms. In Nvidia’s case, this means DLSS is limited to RTX-capable cards and is unavailable on older GTX cards. In contrast, AMD’s FSR uses an open-source, hardware-agnostic method that is allegedly easier and quicker to implement into games.
While DLSS and FSR provide incredibly high-quality results compared to traditional forms of upscaling, Nvidia’s tech generally has the upper hand in image quality. DLSS is often sharper and has fewer issues with motion, resulting in a cleaner image overall.
Digital Foundry’s upscaling comparison in God of War shows this very well. Note how there’s shimmering and flickering around moving elements with FSR that isn’t present with DLSS.
Another great example is Square Enix’s Forspoken. FSR has blockier, lower-resolution textures and rougher details than DLSS, especially around moving objects like the main character. While DLSS isn’t perfect, it offers a much more stable image than FSR in this title.
Forspoken is also one of the few big-budget titles supporting Intel’s XeSS. Intel’s upscaling tech performs admirably but has significant issues with depth-of-field effects in the game. It results in serious flickering issues when image elements are out of focus, such as on the main menu.
However, that’s more of an issue with Forspoken than with XeSS. Cyberpunk 2077 is a great example of what you can get when XeSS works well, offering DLSS-like visuals and a massive 70% performance boost when paired with an Intel GPU.
Overall, Nvidia has a decisive advantage regarding high-quality upscaling. Intel’s XeSS can provide comparable results, but support is limited. AMD’s FSR is perfectly usable in most scenarios, but it does suffer from visual artifacts and generally offers worse visual quality than its rivals.
Upscaling will only get more important as game engines start cranking up image quality, so having access to as many upscaling methods as possible is ideal. Some games, like Remnant 2, even require upscaling to achieve the high framerates most gamers demand.
That provides another point in Nvidia’s favor. An Nvidia GeForce RTX GPU lets you use DLSS, FSR, or XeSS, picking and choosing depending on which looks or performs better.
Frame Generation
Nvidia introduced frame generation (initially branded simply as DLSS 3) with its RTX 40-series cards. DLSS 3 frame generation uses AI and dedicated hardware to generate artificial frames and boost performance at the cost of increased latency. Nvidia counteracts this latency increase by requiring every DLSS 3 frame generation implementation to also include its latency-reducing Reflex technology.
DLSS 3 frame generation support is still limited, but the results are impressive. Frame generation gives Spider-Man Remastered an extra 100 FPS at native 4K on an RTX 4090, pushing it well beyond the refresh rate of even the fastest 4K monitors available.
Cyberpunk 2077 also benefits greatly; enabling DLSS upscaling and Frame Generation gives the game an almost 400% boost in framerates vs. native 4K at max settings. The game holds up surprisingly well, too, with little in the way of image artifacts and inconsistencies that you may expect with AI-generated frames.
AMD has recently released its equivalent solution as part of FSR 3. AMD’s frame generation differs from Nvidia’s solution in that it doesn’t use custom hardware and, thus, works on all modern GPUs regardless of manufacturer. However, the complementary lag-reduction technology, Anti-Lag+, is currently only available for AMD 7000-series GPUs.
AMD’s FSR 3 frame generation has some teething problems that you’d expect from new technology, such as issues with adaptive sync (FreeSync and G-Sync) displays. It can also be stuttery without a frame rate cap, which is not an issue DLSS 3 frame generation has right now. But neither of these are that surprising considering how new the technology is. We expect AMD to iron out many of FSR 3 frame generation’s bugs and quirks over the next few months.
Other AI Enhancements
Nvidia GPUs have a welcome value-add in the form of AI-assisted audio and video enhancements through Nvidia Broadcast. Broadcast uses the AI cores on RTX GPUs to process audio and video in real time, giving you access to features such as high-quality noise removal and automatic eye contact.
Broadcast’s noise removal is particularly impressive. The results rival professional solutions such as iZotope RX’s noise removal algorithms, but Broadcast works in real time. That makes it perfect for streaming or meetings where you can’t guarantee a perfectly quiet environment.
With noise removal enabled, Broadcast eliminates almost all keyboard noise while retaining my voice’s clarity, and you don’t hear any of the watery artifacts you get from free post-process noise removal solutions (like Audacity).
As great as it is, we wouldn’t necessarily say that Nvidia Broadcast is worth buying an Nvidia card for all on its own. Still, it’s a welcome bonus that you can make great use of if you buy an RTX card.
Step 2: Consider Your Monitor
Now that you’ve decided whether to focus on AMD, Nvidia, or Intel, it’s time to start figuring out which specific GPUs you want. But before looking at benchmarks or scouring spec sheets, you should start with your monitor.
If you’re upgrading your PC and sticking with your current monitor, you’ll want a GPU that suits its resolution and refresh rate. The idea here is to utilize your monitor to its maximum without wasting money on extra performance (be it in resolution or framerate) that your monitor can’t display.
Suppose you have a 1080p monitor with a maximum refresh rate of 60 Hz. In that case, you should stick with affordable mainstream graphics cards like the Nvidia GeForce RTX 4060 or AMD Radeon RX 7600, even if you can afford to spend more on a graphics card.
These cards are more than enough for 1080p gaming at 60 FPS. Go higher, and you’ll end up with a card that’ll do 1080p 144 Hz comfortably when your monitor can only display 60 Hz. There’s nothing wrong with that, but it’s a waste of money if you’re not planning a monitor upgrade.
In contrast, if you have a 4K 120 Hz monitor and want to take advantage of the high refresh rate, you’ll want higher-end GPUs like the Nvidia GeForce RTX 4080 or AMD Radeon RX 7900 XTX. These have the power to hit around 100 FPS at 4K, especially with the aid of upscaling technologies. You’re not likely to hit a locked 4K120 all the time, but these cards will get you close.
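One way to think about the monitor-matching advice above is in terms of pixel throughput: resolution times refresh rate. The sketch below encodes that idea; the tier cutoffs and card names are illustrative assumptions drawn from the examples in this guide, not hard rules.

```python
# Rough sketch: suggest a GPU tier from a monitor's pixel throughput.
# Cutoffs are assumptions for illustration, not benchmarks.
def suggested_tier(width, height, refresh_hz):
    pixels_per_second = width * height * refresh_hz
    if pixels_per_second <= 1920 * 1080 * 60:
        return "mainstream (e.g. RTX 4060 / RX 7600)"
    elif pixels_per_second <= 2560 * 1440 * 144:
        return "upper mid-range"
    else:
        return "high-end (e.g. RTX 4080 / RX 7900 XTX)"

print(suggested_tier(1920, 1080, 60))   # mainstream tier
print(suggested_tier(3840, 2160, 120))  # high-end tier
```

Real game performance doesn’t scale linearly with pixel count, so treat this as a starting point before you check actual benchmarks in Step 5.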
Of course, this all goes out the window if you’re buying a new monitor alongside your graphics card. In that case, you can approach this the other way, buying a graphics card and then choosing a monitor that suits its performance.
Step 3: Determine Your Budget
If you’re building a whole new rig, then it’s also important to decide how much you want to spend on your graphics card. Graphics cards range from $200 budget options up to $2,000 monsters, so settling on a number will help you narrow down your choices.
When speccing out a whole new rig, your graphics card should generally be the single most expensive part of your gaming PC. While it’s impossible to set a hard-and-fast rule, we recommend dedicating 35% to 50% of your total budget to the graphics card if you’re building a decent gaming rig.
We recommend you stick to the lower end of that scale for most mainstream builds. So if you’re building a $1000 PC, then you should spend around $350-400 on your graphics card.
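The 35–50% guideline above boils down to a one-line calculation. Here’s a minimal sketch of it; the function name and the 35% default are just conveniences for illustration.

```python
# Sketch of the 35-50% graphics card budgeting guideline.
def gpu_budget(total_budget, share=0.35):
    """Suggested graphics card spend as a share of the total build budget."""
    assert 0.35 <= share <= 0.50, "this guide recommends 35-50%"
    return total_budget * share

print(gpu_budget(1000))        # 350.0 -- the $1000 mainstream-build example
print(gpu_budget(2000, 0.50))  # 1000.0 -- a high-end build at the top of the range
```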
Note that you shouldn’t overspend on your graphics card, especially not at the cost of your CPU. While the graphics card is the most critical part of your system for gaming performance, you’ll still need a decent CPU to feed all the required data to your GPU quickly. So getting the right balance is ideal.
For example, there’s no point going for a $1000+ Nvidia RTX 4080 if you’ll have to drop down to a $100 budget-class CPU like the Intel Core i3-12100F. Sure, the Core i3 will run your games fine, but you may find that it’ll hold the RTX 4080 back and create a CPU bottleneck in some games. Check out our guide to choosing a CPU if you want more info about selecting a CPU.
You’ll also have to consider which specific graphics cards you get. Some RTX 4060 Tis, like the Asus TUF Gaming GeForce RTX 4060 Ti OC Edition, retail above Nvidia’s MSRP. In return, you get higher default clock speeds and potentially better cooling solutions.
Most of the time, however, you’ll be fine with cheaper models. They’ll still run fine and will only be a few FPS slower at most. Temperatures also shouldn’t be a massive issue, as Nvidia and AMD have safeguards in place to ensure that their products don’t overheat.
Step 4: Specs to Consider
There are a ton of specs vying for your attention whenever you shop for graphics cards. From CUDA Cores and Stream Processors to clock speeds, from video RAM to TDP, wading through graphics card specs can quickly become overwhelming.
Thankfully, many of these specifications aren’t as important as they seem, especially when comparing different manufacturers or generations of graphics cards. So let’s start with the most important ones and work our way downward.
Model Numbers and Suffixes
All three current GPU manufacturers use a similar naming system for their GPUs: a four-digit number (three digits, in Intel’s case) that denotes the generation and the GPU’s place in the product stack.
The first number almost always refers to the generation, with bigger numbers indicating newer GPUs. The Nvidia RTX 4080, for example, is more recent than the RTX 3080, which is, in turn, newer than the RTX 2080.
The following two or three numbers denote the GPU’s place within its generation, with higher numbers indicating more powerful parts. The Nvidia RTX 4090 is a higher-tier product than the RTX 4080, which is itself more powerful than the RTX 4070. AMD follows the same pattern: the RX 6900 is more powerful than the RX 6750, which is in turn more powerful than the RX 6700.
AMD and Nvidia also use suffixes to denote higher- or lower-tier parts with the same number. Nvidia’s most common suffix is the long-running “Ti,” which indicates a higher-performance part than a non-Ti card. So an RTX 3080 Ti is a (slightly) more performant GPU than the RTX 3080. Nvidia also has “Super,” which it last used on RTX 20-series cards like the RTX 2060 Super.
AMD is just as guilty of suffix abuse, with two potential suffixes for its latest GPUs. AMD’s latest high-end GPUs share the same number (7900) and differ only in their suffixes: the higher-end part is the 7900 XTX, while the lower-end part is the 7900 XT. However, other AMD 7000-series GPUs only come in XT and non-XT variations, so it’s not too bad.
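The naming convention described above can be sketched as a tiny parser. This is purely an illustration of the scheme; real product lines have plenty of exceptions (mobile parts, rebrands), so don’t treat it as a complete model decoder.

```python
import re

def decode_model(name):
    """Split a GPU model name per the convention described above:
    first digit = generation, remaining digits = tier, optional suffix."""
    m = re.search(r"(\d{4})\s*(Ti|Super|XTX|XT)?", name)
    if not m:
        return None
    number, suffix = m.group(1), m.group(2) or ""
    return {
        "generation": number[0],  # e.g. "4" in RTX 4080: bigger is newer
        "tier": number[1:],       # e.g. "080": place in the product stack
        "suffix": suffix,         # Ti/Super/XT/XTX nudge the tier up or down
    }

print(decode_model("GeForce RTX 4070 Ti"))
# {'generation': '4', 'tier': '070', 'suffix': 'Ti'}
print(decode_model("Radeon RX 7900 XTX"))
# {'generation': '7', 'tier': '900', 'suffix': 'XTX'}
```

Note the `XTX|XT` ordering in the pattern: regex alternation tries branches left to right, so “XTX” must come first or it would only ever match “XT”.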
Video Memory (or VRAM)
Your GPU uses its video memory (often abbreviated VRAM) to store data, such as textures, for ultra-fast access. More VRAM means more room for these textures, enabling you to either game at higher resolutions or crank the texture quality settings to the max.
Eight gigabytes is the minimum for modern graphics cards, although even that is starting to prove limiting in some (poorly optimized) AAA games. It’s still broadly fine for 1080p and 1440p gaming in 2023, but you may run into issues in the future.
More graphics card memory is always better and will give your purchase a bit of future-proofing. Having more VRAM is also crucial if you want to use Nvidia’s DLSS 3 frame generation, as it can use up to two gigs of VRAM at 4K resolution. That said, you should consider the graphics card’s overall performance before deciding whether it’s the product for you. VRAM is important, but it’s not the be-all and end-all of gaming performance.
Size
Your potential graphics card’s dimensions are also hugely important. Check the length and thickness of the card, and make sure it fits in your PC case. Higher-end cards tend to be longer and thicker, with more substantial heatsinks to dissipate all the heat they generate, so they won’t fit in every case.
If you’re building in a compact Mini-ITX or HTPC case, you may need a low-profile graphics card to ensure it fits. Conversely, those building in roomy full-tower cases should be able to accommodate even the most monstrous graphics card without issue.
The last thing you want is to buy a graphics card only to realize it doesn’t fit in your case. So check and double-check before buying!
Power Draw and Power Connectors
A graphics card’s power draw is another vital compatibility spec to pay attention to before you buy a graphics card. Power draw, often expressed in TDP (Thermal Design Power) or TBP (Total Board Power), indicates how much power your graphics card will draw under load.
You want to ensure that your current power supply has enough wattage to power your new graphics card, CPU, and system components. The best way to do this is by plugging your present (and future) parts into a power supply calculator and seeing if your current PSU has enough wattage. If not, it’s either time to upgrade your PSU or drop down to a lower-power GPU.
Failing that, GPU manufacturers and their AIB partners often list recommended power supply wattages for their GPUs and graphics cards. These numbers are a good ballpark figure and will serve you well when determining if you have enough power for your desired GPU.
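The PSU sanity check described above amounts to summing your components’ power draw and leaving headroom. Here’s a rough sketch; the placeholder wattages and the 30% headroom figure are assumptions for illustration, so use a proper power supply calculator or the manufacturer’s recommended wattage for real decisions.

```python
# Rough sketch of a PSU wattage sanity check. All figures are illustrative.
def psu_sufficient(psu_watts, gpu_tbp, cpu_tdp, other_watts=150, headroom=1.3):
    """True if the PSU covers the estimated load plus ~30% headroom.
    other_watts is a placeholder for motherboard, drives, fans, etc."""
    estimated_load = gpu_tbp + cpu_tdp + other_watts
    return psu_watts >= estimated_load * headroom

# A 650 W PSU with a 320 W GPU and a 125 W CPU: (320+125+150)*1.3 = 773.5 W
print(psu_sufficient(650, gpu_tbp=320, cpu_tdp=125))  # False -- upgrade time
print(psu_sufficient(850, gpu_tbp=320, cpu_tdp=125))  # True
```

The headroom matters because GPUs can briefly spike well above their rated board power, and power supplies are also most efficient below full load.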
Another power-related spec to pay attention to is the power connectors your graphics card needs. Higher-end (and higher-power) cards will draw more power, requiring more PCIe power connectors. Some will even require entirely new connectors, like the 12VHPWR connectors on Nvidia’s RTX 40-series cards.
Ideally, you want your power supply to have enough PCIe connectors to power your card without needing adapters. You can use adapters (such as SATA to PCIe adapter cables) in a pinch, but they’re not ideal. We don’t recommend using them long-term.
Realize you need a new power supply? Check out our guide to choosing a PSU.
Display Outputs
The last of what we’d consider the essential specs is the graphics card’s display output ports. If you have a modern monitor, this likely won’t be an issue: most cards ship with a combination of DisplayPort and HDMI ports, perfect for modern gaming monitors and TVs.
However, suppose you’re using an older monitor (or buying an older, lower-end graphics card). In that case, you’ll want to ensure your card has the correct port(s) for your monitor. This avoids any adapter-related headaches and will keep your setup clean.
CUDA Cores, Compute Units, and Clock Speeds
CUDA Cores (Nvidia) and Compute Units (AMD) are the processing units inside your GPU. More is better, but core counts are a poor way to evaluate graphics cards because the numbers are only comparable within the same brand and generation.
For example, comparing the CUDA Cores on an RTX 3050 vs. RTX 3060 may be useful, as they’re built on the same architecture. Here, the RTX 3060’s 3584 cores translate directly to improved performance vs. the RTX 3050’s 2560 cores. However, the comparison falls flat when pitting different generations against each other: the RTX 4060, for example, has fewer (3072) CUDA Cores than the RTX 3060 but offers around 20% better graphics performance.
The same goes for clock speeds. Different GPUs will run at different clock speeds, often with little correlation with performance. For example, the AMD Radeon RX 7600 has a maximum clock speed of 2655 MHz but is significantly slower than the RX 7900 XT and its 2400-MHz clocks.
Clock speeds only come into play when comparing different versions of the same GPU. Add-in-board (AIB) partners ship certain graphics cards with factory overclocks, which can give these cards slightly improved performance over cards running at stock speeds. Even then, however, clock speed differences usually only translate to minor performance boosts. Thus, you can generally ignore clock speeds when shopping for a graphics card.
Step 5: Check Benchmarks
Now, we get to the fun part: benchmarks. Specs can only tell you so much, and benchmarks are where you find out which GPUs will truly run the games you want at the resolutions and framerates you demand.
Ideally, you want to compare benchmark results from the same reviewers. This ensures the testing methodology and rigs are the same, eliminating all variables except graphics card performance. You have many options, from YouTube channels such as Gamers Nexus to publications like Tom’s Hardware and TechPowerUp.
No matter which reviewers you go for, all you really need to do is check the framerate numbers for the resolution(s) you’re interested in. Pay close attention to the 1% lows, which show the worst-case performance for the game and GPU. Average framerates are helpful but can hide stutter; that’s where the 1% (and sometimes 0.1%) lows come into the picture. You want the lows to be as close to the average as possible.
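To see why 1% lows expose stutter that averages hide, here’s a sketch of one common way reviewers derive them: average the slowest 1% of frames from a captured frame-time log. Exact methodology varies between outlets, so treat this as an illustration rather than any specific reviewer’s formula.

```python
# Sketch: derive average FPS and "1% low" FPS from per-frame times (ms).
def fps_stats(frame_times_ms):
    fps = sorted(1000.0 / t for t in frame_times_ms)  # ascending: worst first
    average = sum(fps) / len(fps)
    one_percent_count = max(1, len(fps) // 100)
    one_percent_low = sum(fps[:one_percent_count]) / one_percent_count
    return average, one_percent_low

# A run that's mostly 16.7 ms frames (~60 FPS) with two 40 ms stutters:
times = [16.7] * 98 + [40.0, 40.0]
avg, low = fps_stats(times)
print(f"avg {avg:.0f} FPS, 1% low {low:.0f} FPS")  # the low sits far below the average
```

The average barely moves despite the stutters, while the 1% low drops sharply — exactly the gap between average and lows the text says to watch for.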
Some reviewers will publish multi-game averages in their benchmarks. This can be a convenient way to get an overall idea of a card’s performance, balancing out the high and low outliers and helping set your expectations accordingly.
That said, you can also focus on specific benchmark results if you’re looking to boost framerates in a particular game. For example, you may be a competitive Rainbow Six: Siege player looking for a boost in framerates. In that case, you can look for reviewers that use the game in their testing and focus specifically on their test results.
Benchmarks can also help you choose specific graphics card models. Not all cooling solutions are created equal, and it’s often impossible to tell whether a card from one manufacturer will run as cool or as quiet as one from another AIB. That’s where temperature testing comes into the picture, as it can show you how effective (or not) a card’s cooling solution is.
Unfortunately, there are a ton of AIB cards out there, and not all will get the proper review treatment. So if you’re concerned about temperatures and noise, you’ll want to buy specific graphics cards with proven temperature and noise performance. That said, most modern graphics cards will perform adequately; this final step is only for those with exacting temperature or noise demands.
Hopefully, by this point you’ll have identified a graphics card that offers the best combination of raw performance and extra features for your budget. All that’s left now is to pull the trigger!
Knowing how to choose a graphics card is crucial, whether building your own rig or buying a pre-built computer. It determines how many frames and how much eye candy you can get from your games, so getting it right should be your main priority as a PC gamer.
Given the importance of getting the right graphics card (and GPU), it’s worth spending a lot of time evaluating your options and needs. After all, you don’t want to end up with an underpowered card that can’t deliver the performance or graphics you want.
Need some suggestions on where to start with graphics cards? Check out our list of the best 1080p 144 Hz graphics cards, many of which will also do a decent job at 1440p.