Ruler said:
Pemalite said: 

False.

Ahh yes, 8-core is $300, 12 and 16-core are $800.

Clearly 8-core processors are not $300. *Cough* Xbox One *Cough* Playstation 4 *Cough* dozens of ARM CPUs *Cough*.

You know what makes CPUs cost money? Die size. Jaguar is 3.1mm2 per core.
8x3.1mm2 = 24.8mm2.
https://en.wikipedia.org/wiki/Jaguar_(microarchitecture)

The Xbox One's SoC was 363mm2, meaning that the 8 CPU cores take up about 6.8% of the die space on the 28nm Xbox One chip.
https://www.extremetech.com/gaming/171735-xbox-one-apu-reverse-engineered-reveals-sram-as-the-reason-for-small-gpu

16x Jaguar cores would still be a small cost in the grand scheme of things.
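
If you want to sanity-check that die-area claim yourself, here is a quick back-of-the-envelope sketch in Python (the per-core and SoC figures are the ones from the links above; treating the SoC size as fixed for the 16-core case is obviously a simplification):

# Back-of-the-envelope die-area estimate. Figures come from the links above.
JAGUAR_CORE_MM2 = 3.1     # Jaguar core area at 28nm
XBOX_ONE_SOC_MM2 = 363.0  # total Xbox One SoC area

for cores in (8, 16):
    cpu_area = cores * JAGUAR_CORE_MM2
    share = cpu_area / XBOX_ONE_SOC_MM2 * 100
    print(f"{cores} cores: {cpu_area:.1f}mm2, {share:.1f}% of a 363mm2 SoC")

# 8 cores:  24.8mm2,  6.8% of a 363mm2 SoC
# 16 cores: 49.6mm2, 13.7% of a 363mm2 SoC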


Ruler said:

By who? Third parties who want to save money wherever they can? And yet they are still so greedy and put microtransactions into their games this generation.

Everyone. Even its creators, Sony, IBM and Toshiba, abandoned Cell and went to ARM and x86 for everything.

https://web.archive.org/web/20091031112643/http://www.hpcwire.com/features/Will-Roadrunner-Be-the-Cells-Last-Hurrah-66707892.html


Ruler said:

Most developers aren't programming that way though, otherwise we would already have had 60fps games on consoles. There are a few games designed that way, like Knack 1 and Knack 2. Evil Within 1 and 2 maybe too, don't know.

Consoles do have a ton of 60fps games.
Doom, Wolfenstein, Halo 5, Overwatch, Call of Duty, Battlefield, Xenoverse... List goes on.





Again. It is up to the developer and whether they wish to chase 60fps. Not all games will be bottlenecked by the CPU.

Ruler said:

Well it would be up to the developers, as the Cell can render almost anything. Rendering shadows and real-time lighting would be a good use. Also the calculation of destructible environments.

The Super Nintendo's CPU could render almost anything. It really depends on how many centuries you wish to wait.

Jaguar does up to 4 double-precision flops per cycle (3 is more realistic in practice).
https://en.wikipedia.org/wiki/FLOPS

3 * 1,600MHz * 8 cores = 38.4Gflops in double precision.
When AVX is in play, Jaguar can do 16 single-precision flops per cycle.

16 * 1,600MHz * 8 cores = 204.8Gflops.

Cell reaches 20.8Gflops in double precision, making Jaguar ~85% faster than Cell.
Cell reaches 230Gflops in single precision, making it ~12% faster than a 1.6GHz Jaguar.

https://en.wikipedia.org/wiki/Cell_(microprocessor)#Architecture
https://en.wikipedia.org/wiki/PlayStation_3_technical_specifications

But what about the CPU in the Xbox One X? That is clocked at 2.3GHz... Meaning...
3 * 2,300MHz * 8 cores = 55.2Gflops.
16 * 2,300MHz * 8 cores = 294.4Gflops.

Meaning that the Xbox One X's 8-core Jaguar is ~165% faster than Cell in double precision and ~28% faster in single precision.

Meaning that for rendering, Jaguar soundly beats Cell.
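
For anyone who wants to check that arithmetic, here it is as a small sketch (the flops-per-cycle figures are the same assumptions stated above, not measurements):

# Peak theoretical throughput: GFLOPS = flops-per-cycle * clock (GHz) * cores.
def peak_gflops(flops_per_cycle, clock_ghz, cores):
    return flops_per_cycle * clock_ghz * cores

print(round(peak_gflops(3, 1.6, 8), 1))   # Jaguar @ 1.6GHz, double precision -> 38.4
print(round(peak_gflops(16, 1.6, 8), 1))  # Jaguar @ 1.6GHz, single precision -> 204.8
print(round(peak_gflops(3, 2.3, 8), 1))   # Jaguar @ 2.3GHz (XB1 X), DP       -> 55.2
print(round(peak_gflops(16, 2.3, 8), 1))  # Jaguar @ 2.3GHz (XB1 X), SP       -> 294.4

# Cell's commonly quoted peaks: ~20.8 GFLOPS DP, ~230 GFLOPS SP.
# 55.2 / 20.8 = ~2.65x (about 165% faster in double precision).
# 294.4 / 230 = ~1.28x (about 28% faster in single precision).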

Ruler said:

Yeah because the optimisation is bad. AMD has the most powerful hardware right now, with Vega 64 performing at 12TF, but it's still struggling to outperform a 1080 Ti with 10TF.

Fucking bullshit. It has nothing to do with optimization.
You are still clinging to the false idea that flops are the only thing that matters for rendering a game.

Clearly I need to school you again here.

The Radeon HD 5870 has 2720Gflops (1600 shader pipelines * 850MHz clock * 2 ops per clock).
The Radeon HD 7850 has 1761Gflops (1024 shader pipelines * 860MHz clock * 2 ops per clock).

https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Radeon_HD_5000_Series
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Radeon_HD_7000_Series

You would think the Radeon 5870 would beat the Radeon 7850, right? It has more flops, doesn't it?

But guess what? It's slower.
https://www.anandtech.com/bench/product/1062?vs=1076
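
Here is the same peak-throughput arithmetic as a sketch, using the shader counts and clocks quoted above (the "2 ops per clock" assumes a fused multiply-add counts as two flops):

# Peak shader throughput: GFLOPS = shaders * clock (MHz) * 2 flops (an FMA) / 1000.
def gpu_gflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1000

print(gpu_gflops(1600, 850))  # Radeon HD 5870 -> 2720.0
print(gpu_gflops(1024, 860))  # Radeon HD 7850 -> 1761.28

More paper flops on the 5870, and yet the 7850 wins the benchmarks. That is the whole point.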


Bet'cha wish I would stop using this thing called evidence now, huh?

The fact of the matter is, for rendering there are other aspects of a GPU that are important.

Like texture mapping units, which are responsible for the resizing, rotating and sampling of textures.
https://en.wikipedia.org/wiki/Texture_mapping_unit

Or render output units (ROPs).
https://en.wikipedia.org/wiki/Render_output_unit

Geometry units, bandwidth, memory capacity, compression capabilities and so on are all part of the GPU as well, and none of them show up in a flops figure.

AMD GPUs perform the way they do because AMD hasn't invested in these aspects to the same degree nVidia has with its GPUs. AMD, for instance, is notorious for having poorer geometry throughput than nVidia.

https://www.anandtech.com/show/8460/amd-radeon-r9-285-review/3

Ruler said:

AA is a GPU thing, the CPU isn't the bottleneck for that.

False. Anti-Aliasing can be done on the GPU or CPU.

https://software.intel.com/en-us/blogs/2011/07/18/cpu-morphological-antialiasing-mlaa-sample-now-live
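
To make that concrete, here is a deliberately simplified sketch of post-process anti-aliasing running entirely on the CPU. This is not Intel's MLAA sample, just a toy NumPy edge-detect-and-blend (the function name and threshold are mine), but it shows there is nothing GPU-exclusive about the idea:

import numpy as np

def cpu_post_aa(img, threshold=0.1):
    # img: float RGB image in [0, 1], shape (height, width, 3).
    # Perceptual luminance per pixel.
    luma = img @ np.array([0.299, 0.587, 0.114])

    # Flag pixels whose luminance differs sharply from a neighbour ("edges").
    dx = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
    dy = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
    edges = (dx > threshold) | (dy > threshold)

    # Average each interior pixel with its 4 neighbours, then blend that
    # softened value back in on edge pixels only, taming jagged edges.
    blurred = img.copy()
    blurred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                           img[1:-1, :-2] + img[1:-1, 2:] +
                           img[1:-1, 1:-1]) / 5.0
    out = img.copy()
    out[edges] = 0.5 * img[edges] + 0.5 * blurred[edges]
    return out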


Ruler said:

Yeah but you know what the CPU is? A 3-core 1.2GHz PowerPC, and that's weaker than even the Xenon inside the Xbox 360. Yet look at the games that still came to the system this year.

The difference between us is that I know the tricks they used to achieve what they did.

Ruler said:

Obviously the Wii U has other stuff like more RAM and a faster GPU, but still, it's impressive for a 1.2GHz CPU. And you know most Nintendo games run at 60fps.

The Wii U's CPU is an out-of-order design, which gives it a significant leg up on the in-order PowerPC designs found in the Xbox 360 and Playstation 3. It does cost extra transistors, but it's worth it if you can afford them.

https://en.wikipedia.org/wiki/Out-of-order_execution
https://arstechnica.com/gaming/2012/11/why-you-cant-read-too-much-into-the-wii-us-slow-clock-speed/

Clockspeed isn't everything; that debate ended when the Pentium 4 came along.


Ruler said:

It's a benchmark from 2005-2007, from the PS2 era, what did you expect? But there is even more.

You clearly missed the point.
The point was that a 300MHz Pentium II from 1998, with a fraction of the performance, ~9 years before Cell arrived in the Playstation 3... was rendering 3D graphics.
It's not a feature exclusive to Cell; ALL CPUs can render graphics.

Ruler said:

Yes it does, it enables the highest speed for renders that need it.

No. RAM has zero processing capability. You need to prove otherwise, as the burden of proof lies with you.


Ruler said:

But some remasters show that they are struggling with the Jaguar, as they still cap the framerate to 30fps, like Skyrim, the Ezio Collection or Resident Evil Remastered.

That's just poor porting. Don't forget either that those remasters tend to operate at higher resolutions and usually have other enhancements such as better lighting, shadowing, draw distances and so on.

That's not all free on hardware, you know.



Ruler said:


And I am pretty sure there are more ports from last gen, and even completely new remasters like Crash Bandicoot can't escape rendering at 30fps. It's pretty evident at this point that the Jaguars don't make a difference.

Doesn't help that they are framerate capped.
It's not as simple as just unlocking the framerate to hit 60fps.

Scripting and animations are often tied to the framerate on consoles, which brings a ton of caveats.
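
A toy illustration of why frame-tied logic breaks when you simply uncap the framerate (the numbers here are made up for the example):

# Frame-coupled logic versus time-based logic.
def update_frame_coupled(position):
    # Tuned against a locked 30fps: advances 3 units per *frame*.
    return position + 3.0

def update_time_based(position, dt):
    # Advances 90 units per *second*, independent of framerate.
    return position + 90.0 * dt

pos_coupled = pos_timed = 0.0
for _ in range(60):                 # simulate one second rendered at 60fps
    pos_coupled = update_frame_coupled(pos_coupled)
    pos_timed = update_time_based(pos_timed, dt=1 / 60)

print(pos_coupled)          # 180.0 -> everything runs twice as fast at 60fps
print(round(pos_timed, 3))  # 90.0  -> correct speed at any framerate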



That doesn't mean Jaguar is bad, just that 30fps is what the developers chose.

Ruler said:

First off, that is a $2000 CPU.

The cost of a CPU is ultimately irrelevant and doesn't change how many cores a game will fundamentally utilize.

I have a 6-core/12-thread processor and PUBG on PC. Want me to show you some benchmarks of core scaling?
Or was the prior evidence that I provided enough to satisfy you?

Ruler said:

Second, going from 88 to 141fps is only a 40% increase despite adding 3 times the cores, so that is showing that more cores for this game isn't doing much.

Doesn't change the fact that there are still gains thanks to the extra CPU cores.
You are arguing against evidence.
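
For the record, the arithmetic on those two numbers comes out higher than 40% anyway:

# 88fps -> 141fps from the extra cores:
print(round((141 - 88) / 88 * 100, 1))  # 60.2 (percent)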

Either pony up the evidence or take a hike.

Ruler said:

Second, this $2000 CPU you are showcasing is clocked at 3.3GHz, just like the Cell processor.

Clockspeed is not everything.

There is a reason why people aren't using a 3.8GHz Pentium 4 in 2017: because it's a piece of crap.
https://www.anandtech.com/bench/product/92?vs=118

Ruler said:

All of these RAM types aren't available in 2019 and probably cost more than XDR2 RAM. Provide evidence that DDR2 can be faster than GDDR5 or XDR2 if you say it.

I never said all of them were available. GDDR6 will be coming within the next 6 months.

But HBM2, HBM and GDDR5X most certainly are.
https://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus
https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review
https://www.anandtech.com/show/11262/asus-launches-geforce-gtx-1080-11-gbps-and-1060-9-gbps

As for DDR2 being faster than GDDR5: that is just simple mathematics.
Bandwidth = memory clock * data-rate multiplier * bus width / 8. (I.E. GDDR5 transfers 4x per clock, DDR2 2x.)

Thus DDR2 @ (800MHz) 1600MT/s effective on a 128-bit bus would have identical bandwidth to GDDR5 @ (400MHz) 1600MT/s effective on a 128-bit bus: 25.6GB/s either way.
Increase the DDR2's bus width to 256-bit and it would be faster than that GDDR5.
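
Here is that bandwidth maths as a quick sketch (purely theoretical peak figures, using the multipliers above):

# Theoretical peak bandwidth in GB/s:
# memory clock (MHz) * data-rate multiplier * bus width (bits) / 8 / 1000.
def bandwidth_gbs(clock_mhz, multiplier, bus_bits):
    return clock_mhz * multiplier * bus_bits / 8 / 1000

print(bandwidth_gbs(800, 2, 128))  # DDR2  @ 800MHz on a 128-bit bus -> 25.6
print(bandwidth_gbs(400, 4, 128))  # GDDR5 @ 400MHz on a 128-bit bus -> 25.6
print(bandwidth_gbs(800, 2, 256))  # DDR2  @ 800MHz on a 256-bit bus -> 51.2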


Ruler said:

 https://www.youtube.com/watch?v=ZcF36_qMd8M

I said I demand evidence, not a video of someone else with an opinion that aligns with your own.

Ruler said:


They are already running on AMD hardware inside the PS4 and PS4 Pro and look artistically and technically better than anything on PC.

Technically? Heck no.
Star Citizen is technically ahead of anything on the consoles; its sheer scale is a testament to that very fact.





Artistic merit is a completely personal opinion and will change from individual to individual, which makes it a useless argument.


Ruler said:

On PC it's also the case, but it's also often needed just to have something playable in the first place.

Bullshit.
I have 640 games on Steam. If games being released in an unplayable state were the norm... I would know about it.
http://steamcommunity.com/id/Pemalite/

Ruler said:

But the difference between PC and consoles is that you have a variety of graphics cards and processors; sure, you get a patch that benefits the latest graphics cards and CPUs, but not the older ones most of the time.

False. Case in point: AMD's frame pacing driver, which even benefited completely different VLIW architectures.

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/62439-amd-13-8-frame-pacing-driver-tested-benchmarked.html

Ruler said:


But regardless of all that, it doesn't answer my main point that AMD hardware, both CPUs and GPUs, runs worse than Intel or Nvidia.

AMD may not have the performance crown, but that doesn't mean their hardware is shit.

There is no such thing as a bad GPU/CPU, only a bad price. People still bought AMD FX CPUs despite the terrible performance those chips offered. Why? Price.

Ruler said:

They have the strongest GPU on the market and it still runs worse than Nvidia's, like I have pointed out above. It's not an issue with consoles; if Nvidia and Intel are really so much better at their job, you would have seen Nvidia or Intel hardware inside consoles.

Vega isn't the strongest GPU on the market, for the reasons (and evidence) that I alluded to prior.


Hynad said:


Really... How do you keep the will to reply?

Have you not paid attention over the years? I live for debating tech. :P

It's hilarious when someone makes a ludicrous comment and I get bombarded on Steam, by private message and so on with people asking me to interject.



--::{PC Gaming Master Race}::--