Minimum Decent Setup for DK2?

alimasri
Honored Guest
What are the minimum acceptable specs for a PC and graphics card to buy before getting the DK2? And I'm stressing the minimum here!

raidho36
Explorer
Good question. Well, the bare minimum, aside from other components, is a GPU that supports pixel shaders and has a DVI/HDMI output capable of substantial bandwidth. Any modern GPU meets the first criterion, but as for the latter, the CV1 is going to push the limits there, so you really need an up-to-date card.

Right now there's only a handful of full-scale games, and those have adequate graphics settings - in those games you can always find a setting that fits all the necessary textures in the available memory and runs at 100% resolution at a high framerate on any modern GPU. Unity demos, however, tend to have NO settings whatsoever (the "graphics quality" option in the launcher window doesn't do anything). With the CV1 release the number of full games will grow vastly.

It is also a good idea to pick a processor with a high per-core operating frequency, because most of the processing tasks in games don't scale well across cores, or don't scale at all (you can't use multiple cores to do them faster), so the more a single core can do per second the better; the total number of cores is mostly irrelevant, anything higher than one will do. Modern games tend to use the processor a lot. And the real problem is that if the game can't finish its work in time on the CPU, the framerate will drop even if the GPU chews through its data like it's nothing. There are ways to avoid that, but most developers just don't bother.

In terms of RAM, that really depends on the game you want to play, but it's dirt cheap, so you could just stock up to the max for something like $50. In reality you won't need more than 4 GB, and even if you do, you can literally save your lunch money to get extra; it's peanuts.

Nothing else really plays a role.

Anonymous
Not applicable
"raidho36" wrote:
/snip



I really hate to be "that guy", but as someone who works in IT and builds high-end custom gaming rigs and workstations, I dislike posts like this. You're offering advice and trying to help, which is absolutely great, but it doesn't really seem like you know what you're on about.

First off, a GPU that supports pixel shaders? As in a pixel shader version? Which version in particular are you referencing (which is largely irrelevant, to be honest, as any modern GPU will support the latest versions), or do you mean it in the hardware sense? In that case, only much older GPUs had separate pixel/vertex shader units; nowadays they're unified and are called "stream processors". Regardless, it's not something that's relevant when recommending a GPU.

Also, I respectfully disagree with your information regarding CPUs. While it's true that most games benefit from higher single-core frequencies rather than lower speeds with more cores, that recommendation only stretches so far, and definitely not to the point where a fast (in the GHz sense) single-core CPU would be adequate. It's far too sweeping a statement to be helpful. The CPU's IPC (instructions per clock) plays a huge role here: a CPU running at 3.2GHz with a low IPC can be far slower than a CPU running at 2.6GHz with a higher IPC - heck, remember the NetBurst Pentium 4s? For example, comparing modern AMD CPUs to Intel CPUs, AMD's IPC is lower than Intel's, which AMD makes up for with more cores (which helps with multithreaded tasks such as video rendering, not so much games). In CPU-limited games a 3.4GHz Intel Core i3 Sandy Bridge/Ivy Bridge/Haswell (dual core with Hyper-Threading to offer 4 logical threads) will outperform a higher-clocked AMD quad/six/octo-core CPU.
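To make the clock-versus-IPC point concrete, here's a rough back-of-the-envelope sketch (the IPC figures are invented purely for illustration, not measured values for any real CPU):

```cpp
#include <cstdio>

int main() {
    // Effective single-thread throughput ~= clock (GHz) * instructions per clock.
    // These IPC numbers are made up for illustration only.
    double clock_a = 3.2, ipc_a = 1.0;  // hypothetical high-clock, low-IPC CPU
    double clock_b = 2.6, ipc_b = 1.6;  // hypothetical lower-clock, high-IPC CPU

    double throughput_a = clock_a * ipc_a;  // ~3.2 billion instructions/s per core
    double throughput_b = clock_b * ipc_b;  // ~4.2 billion instructions/s per core

    std::printf("CPU A: %.2f GIPS, CPU B: %.2f GIPS\n", throughput_a, throughput_b);
    // Despite the lower clock, CPU B gets more work done per second per core.
}
```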

There are also a lot of games which benefit massively from a quad core over a dual/single core. In fact unless you're on a tight budget you definitely want to be aiming for a quad core for a gaming rig.

However, you're correct that if the CPU can't keep up, the frame rate will drop even if you have an incredibly powerful GPU. This is what being CPU-limited means: the CPU is the bottleneck and the GPU has to wait for it. It's not that developers don't care; it depends on a large number of variables, some of which developers just can't account for. For example, higher resolutions and graphics settings put far more load on the GPU, whereas if you lowered the resolution and graphics settings, the load on the GPU would lessen to the point where the CPU becomes the bottleneck.
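A toy model of what "CPU-limited" means, under the simplifying assumption that CPU and GPU work overlap and the slower of the two sets the frame time (the millisecond numbers are invented for illustration):

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    double cpu_ms = 14.0;       // hypothetical per-frame CPU work (game logic, draw submission)
    double gpu_ms_high = 20.0;  // hypothetical GPU time at high resolution/settings
    double gpu_ms_low  = 8.0;   // hypothetical GPU time after dropping resolution/settings

    // Simplified: frame time is set by whichever processor finishes last.
    double frame_high = std::max(cpu_ms, gpu_ms_high);  // 20 ms -> GPU-limited
    double frame_low  = std::max(cpu_ms, gpu_ms_low);   // 14 ms -> CPU-limited

    std::printf("High settings: %.1f fps (GPU-limited)\n", 1000.0 / frame_high);
    std::printf("Low settings:  %.1f fps (CPU is now the bottleneck)\n", 1000.0 / frame_low);
}
```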

4GB of RAM in 2014 will not cut it, to be honest. Again, a large number of current games show huge benefits from having more than 4GB of RAM (Battlefield 3/4 being the best examples off the top of my head). Aim for 8GB for a gaming rig.

This is by no means a personal attack on you, but please bear in mind that when someone comes to a forum asking for help, you need to be spot on and concise with your advice and information, otherwise you can just end up causing further confusion.


In response to the OP: it's hard to say, as the minimum rig that can provide a comfortable playing experience is fairly subjective. This is what I would personally consider a minimum while maintaining a comfortable framerate without having to skimp massively on the graphical settings:

- Intel Core i3 Sandy Bridge/Ivy Bridge/Haswell ~3.2GHz+ (Core i5 preferable)
- 8GB RAM
- NVIDIA GTX 660 / AMD HD 7870

Though with the Rift especially, rendering side-by-side 3D at 960x1080 per eye is rather intensive, so putting as much money as you can into the GPU would be a wise idea.
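For a rough sense of why that's demanding, here's a quick pixel-throughput comparison. It deliberately ignores that the Rift SDK renders each eye to an oversized buffer before the distortion pass and that the scene is drawn once per eye, so treat it as a lower bound:

```cpp
#include <cstdio>

int main() {
    // DK2 panel: 960x1080 per eye, two eyes, 75 Hz refresh.
    double dk2_pixels_per_sec = 2.0 * 960 * 1080 * 75;    // ~155.5 million pixels/s
    // Typical desktop monitor: 1920x1080 at 60 Hz.
    double monitor_pixels_per_sec = 1920.0 * 1080 * 60;   // ~124.4 million pixels/s

    std::printf("DK2 panel: %.1f Mpix/s\n", dk2_pixels_per_sec / 1e6);
    std::printf("1080p@60:  %.1f Mpix/s\n", monitor_pixels_per_sec / 1e6);
    // On top of the raw pixel count, draw calls and vertex work are roughly
    // doubled because the scene is rendered once per eye.
}
```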

I hope this helps.

raidho36
Explorer
"Cyph3r" wrote:
tl;dr

Yes, just "pixel shaders". Because any pixel shader version could to barrel distortion, with exception to the most ancient ones, which IIRC were just "shaders".

Yes, I know that just one core isn't enough for all the tasks a game will pull; that's why I said "anything higher than just one". Any modern game will spawn a bunch of threads specifically to make use of the several CPU cores available. However, some games have workhorse threads that eat nearly 100% of a core while the other threads are barely active. That's why it's beneficial to have a higher CPU frequency, and why having more than 2 cores practically doesn't make a difference most of the time. The only program that can eat up all 4 of my CPU cores at 100% is a zip archiver; no game has pulled that off so far - the worst case I've observed is 2 massive worker threads that eat up 2 cores and a pile of idling helper threads with total activity below 10%.

There's always a way to optimize the graphics to make heavy scenes run at a very high framerate. The biggest win in terms of effort versus results is the VBO - upload all of your geometry and render it in one call, and it gets rendered in record time. But there's a threshold beyond which brute-force rendering becomes slow, so you have to cull things out, and you have to do that on the CPU. You still have your geometry uploaded and it's still rendered fast, but you have to make an individual render call for every batch of objects. You can batch up large chunks of geometry, which really helps to increase culling speed and decrease the number of calls, but most devs just put every single little object in as a separate mesh, which results in ungodly performance *coughETS2cough*. This is where Mantle comes in handy, but it isn't there yet. And of course there are things like LODing and whatnot. Speaking of processors, there are a lot of factors in performance, not just frequency and IPC, and some of them are much more important than those two - bigger L1 and L2 caches can make a very big difference by saving fetch time, for one. But overall those two are just a simple way to compare.
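To picture the VBO point, here's a minimal OpenGL-flavoured sketch (the function names and buffer layout are my own invention for illustration, and vertex attribute setup is omitted; it assumes a context and extension loading are already in place): upload the geometry once, then each frame is a bind plus a single draw call instead of re-sending vertices.

```cpp
#include <GL/gl.h>  // assumes an OpenGL context and extension loading are already set up

GLuint vbo = 0, ibo = 0;
GLsizei indexCount = 0;

// One-time setup: push all vertex/index data into GPU-side buffers.
void uploadGeometry(const float* vertices, GLsizeiptr vertexBytes,
                    const unsigned int* indices, GLsizei count) {
    indexCount = count;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexBytes, vertices, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, count * sizeof(unsigned int),
                 indices, GL_STATIC_DRAW);
}

// Per-frame: the whole batched geometry goes out in a single draw call.
void drawBatched() {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    // Vertex attribute setup (glVertexAttribPointer etc.) omitted for brevity.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}
```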

I don't know of any game besides Minecraft that actually benefits from having more than 4 gigs of RAM. As I said, you could just stock up to the max because it's a cheap part.

Your options there are more like a mid-range gaming machine, not a bare minimum. But obviously the more powerful the machine you've got the better, and there's no upper limit.

Anonymous
Not applicable
"raidho36" wrote:
/snip


"Just pixel shaders"? Again you're getting confused, before unified stream processors came into play there were pixel and vertex shaders on the GPU which handled their own tasks. But this is no longer the case, and again is an irrelevant measure and would mean absolutely nothing to someone asking for advice.

Again, "any number of cores above 2 practically doesn't make a difference." I'm sorry but this is incorrect. I shall give one example now (though if you require any more, there's about 50,000 to choose from). Here is a CPU comparison in Battlefield 4:



Notice how, as long as the CPU has 4 cores/threads, there is fairly even performance across the board, but as soon as it drops to the tri/dual core CPUs there is a huge negative performance impact:

3Ghz Athlon Quad Core: 88fps
3.3Ghz Athlon Dual Core: 43fps

Literally half the FPS. And of course there is more to CPU performance than just core count - clock speed, IPC, amount of cache - I know that; that was my point to you. So it's odd that you're making that statement as a rebuttal when in fact it just contradicts your first post.

"raidho36" wrote:
/snip


It's clear you have a somewhat vague understanding of what you're saying; it's all very general, and you seem to be throwing as many buzzwords as possible into a single paragraph. I do a lot of freelance work as a 3D artist and I understand the graphics side of performance with regard to LODs, meshes and texturing, and you're making a very large sweeping statement about how most devs handle geometry. That isn't the point, though; there are pros and cons to all methods, but ultimately it's irrelevant to the debate about the minimum hardware required to provide a comfortable playing experience with the Rift. (Though of course, if you wish to have a discussion regarding the ins and outs of performance optimization methods in games, feel free to contact me.)

Also, as with your argument about CPUs, there are plenty of games which benefit from having more than 4GB of system RAM, not just Minecraft. As I said, the Battlefield series is an excellent example of this (and is in fact the main reason I upgraded from 4GB myself). While the FPS wasn't hugely impacted with only 4GB of RAM, there were large stutters whenever the game had to swap data between RAM and the pagefile. Upgrading to 8GB cured this issue. It's widely documented, and not just with Battlefield, so feel free to hit up Google for 20 minutes.

I agree that my options are more in line with a budget mid-range gaming rig, though as I said, rendering SBS 960x1080 per eye can be quite a demanding task unless you really tank the graphics settings. And I don't know about you, but personally I would not want to play all my games near the lowest graphical settings just to reach a solid 75fps at SBS 960x1080. The options I gave should offer at least some leeway to comfortably raise the graphical settings in most games, though definitely not all.

Lane
Honored Guest
Something like this is the minspec.

You'll be running on low quality settings and would benefit the most from an upgraded GPU.

Anonymous
Not applicable
"Lane" wrote:
Something like this is the minspec.

You'll be running on low quality settings and would benefit the most from an upgraded GPU.


Bingo. 🙂

raidho36
Explorer
"Cyph3r" wrote:

Well, BF4 seems to do a good job at multithreading; that's really rare to see. It must be using extra threads for rendering. However, seeing a significant framerate drop on a CPU that doesn't have 10 GHz of total processing power doesn't make a good impression either. I mean, it's not Dwarf Fortress - what could a first-person shooter possibly do to eat up that much processing power? It can't be the physics - that's done on the GPU.

And of course I know that there's been a different pipeline since a certain point and those are no longer even shaders as we knew them. What difference does that make with regard to being able to do barrel distortion? Older cards did have pixel shaders that were actually pixel shaders. So what's your point, even?

What's with "accusing" me of using buzzwords? VBO ain't a buzzword, it's a term for GPU feature. It's for storing your vertex data on the GPU alike to texel data. Over time GPUs became so fast they could render geometry faster than you can supply it, so they came up with a method to store vertices on the GPU as well. Of course my talk is general, you didn't expected me to lay out detailed essays on every single tangential topic now did you. And what for anyway, 80% of people wouldn't know what I'm talking about and the rest 20% know it already, I just threw in some basic examples. I've exaggerated my estimate of developers practices adoptation share, but if most of my games have FPS drops not because my settings are too high and my GPU is choking on it, but because the game would clog up 1 out of 4 CPU cores while other three are almost free and while the GPU runs in 15% load, there's definitely something wrong with game programming. Seriously, data compression process would grind 40 megabytes per second through series complex functions, what could possibly a game do at a similar computational complexity scale? Unless it's a Dwarf Fortress of course, with this game you could tell really easily where every single CPU cycle goes. But of course I know why it's that. It's because modern frameworks like to use memory a lot because there's so much available anyway. But using more memory on a regular basis decreases chances of a processor cache hit, in which case it would require RAM fetch, which takes bloody forever and during that the whole conveyor is in stall.

I know what more RAM is for, I just haven't seen games that actually need that much of it, except Minecraft. I don't play EA games, so I must have missed that. I have 8 GB installed as well, but it doesn't help much against paging, because in Windows paging is overly aggressive and will shove everything into the page file regardless of the free space available in RAM. I can't disable paging because Minecraft would crash shortly after starting; that thing really eats RAM for breakfast. It isn't an issue but it is an annoyance. Linux handles it much better, and it also uses free RAM for various buffers and such, which improves OS performance a little.

I also don't think playing VR on the lowest settings is a good idea, but if you want higher settings your card must be able to pull that off, so you could just as well recommend a high-end rig. A low-end recommendation, I think, is more about getting the cheapest parts, and there are a bunch of parts that perform far better than their price suggests - I know they exist, but I'm not an expert.

kernow
Heroic Explorer
Whenever I'm upgrading, I always run over to http://www.cpubenchmark.net (PassMark) first to compare benchmarks and prices (they track both for CPUs, video cards and memory). I start by looking at the high-end chart for whatever I'm after, find the highest-performing models within my price range, and then run over to Amazon or some other website, compare the specs between the models I'm interested in, and give a glance at the buyer reviews.

It's a little bit of cheating, I guess, but it seems to work out very well, and I don't have to spend nearly as much time researching. I also tend to refrain from upgrading until the items I'm interested in (and that are within my budget) are about twice as fast as what I have (though I don't stick to that all the time).
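That shortlisting process is easy to sketch in code, for anyone curious - the part names, scores and prices below are invented, not real PassMark data:

```cpp
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <string>
#include <vector>

struct Part {
    std::string name;
    int benchmarkScore;  // e.g. a PassMark-style number
    double price;
};

int main() {
    // Invented example data, not real benchmark results.
    std::vector<Part> candidates = {
        {"CPU Alpha", 7200, 189.0},
        {"CPU Beta",  9100, 249.0},
        {"CPU Gamma", 6300, 129.0},
    };
    double budget = 200.0;

    // Keep only parts within budget, then rank by benchmark score.
    std::vector<Part> shortlist;
    std::copy_if(candidates.begin(), candidates.end(), std::back_inserter(shortlist),
                 [&](const Part& p) { return p.price <= budget; });
    std::sort(shortlist.begin(), shortlist.end(),
              [](const Part& a, const Part& b) { return a.benchmarkScore > b.benchmarkScore; });

    for (const auto& p : shortlist)
        std::printf("%s: score %d, $%.0f\n", p.name.c_str(), p.benchmarkScore, p.price);
}
```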

mdrejhon
Protege
For blur-free low-persistence operation, you ideally want framerates matching refresh rates, e.g. 72fps @ 72Hz or 75fps @ 75Hz. Otherwise you get the double-image effect (like 30fps@60Hz on a CRT) and/or amplified stutters (CRTs always looked more stuttery than LCDs at the same Hz, due to the lack of motion blur).

Low persistence at the proper full frame rate is AMAZING, so you REALLY want to optimize for 75fps operation in the games you plan to play on the DK2.
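To put the 75fps target in numbers, here's a quick frame-budget check (purely arithmetic; the example frame time is hypothetical):

```cpp
#include <cstdio>

int main() {
    double refresh_hz = 75.0;                      // DK2 low-persistence mode
    double frame_budget_ms = 1000.0 / refresh_hz;  // ~13.3 ms to finish each frame

    double example_frame_ms = 16.0;                // hypothetical frame that runs long
    bool missed_refresh = example_frame_ms > frame_budget_ms;

    std::printf("Budget: %.1f ms/frame; this frame took %.1f ms -> %s\n",
                frame_budget_ms, example_frame_ms,
                missed_refresh ? "missed the refresh (judder/double image)" : "on time");
}
```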