Kool but I'll wait until we get official numbers from the devs.
CGI-Quality said:
What's sad is that some people will come here and believe it. Truth is, just like with the PS3 vs 360, some people will have a hard time accepting that one of them is more powerful. Just so happens that if the person doesn't favor Sony, these types of articles crop up. |
I'm not sure why people do this, other than believing their company of choice is going to give them a bunch of cookies for doing the spin. Sorry folks, the cake and the cookies are a lie.
CGI-Quality said:
I don't get what you're saying, but I'm assuming you're making a claim that what I said is wrong. If so, how? |
I was talking about individuals like the original poster, who will post, and then strongly defend, their biased articles, as if they were getting a cookie from their company. They get nothing out of it except an ego stroke from winning arguments. My comment was aimed at the individuals you spoke about in the last sentence of your post.
BUT, if they want to argue they do get a cookie from their console of choice, for supporting it, here is that cookie:
"Sometimes I don't know how to explain things as well as you have."
Don't worry, to explain the comparison in this thread you could also make a sci-fi movie.
Two sentences:
1. Confirmed XB1 Dev
2. The facts are on paper, the PS4 has better specs and the most you can debate is by how much.
That's the end of this thread, but well... you guys can keep playing with the unicorn.
See ya.
Zappykins said:
Thanks for bringing your knowledge and experience to this discussion. Sometimes I don't know how to explain things as well as you have. People, especially here, seem to be confused about how things work. It's much more complicated than just 8+1.2 or 8+1.8. When things are no longer linear, it is much harder for most to follow. But the multi-platform developers are having their own challenges, so we really don't know yet exactly how things will work out. As I said during the reveal, I am concerned that Sony may have a Cell-type blunder again. The unified GDDR memory might work for them, but I have yet to be shown by Sony that the concern is actually irrelevant. They should let the public have some game-play. But on another subject, just curious, what are you referring to with Microsoft abandoning GDDR memory? Was this an actual product or just development? |
The Xbox 360 used GDDR3 memory in a unified memory architecture.
drkohler said:
I have no idea what you are trying to say here. What is "flushing memory before writing?" You do know that gddr memory adds a ton of features to accessing memory that are simply not available with ddr memory? This is why the controller for gddr memory is much more complex than your run-of-the-mill ddr controller, btw.. |
Typically, you read data from the HDD into a block of memory. The size of the block varies by memory type: 256 bits (b) on GDDR5. So if you have a file that's 4 bytes (B) in size, you've wasted 28 B. But let's say you now want to do something with that data. Maybe it's an XML file that you keep user stats stored in. Normally, with DDR3 memory you would be able to append to the memory. With GDDR5 you have to either 1) clear that block of memory, then write the complete data to it, 2) write to a separate block of memory, then clear out the original block, or 3) use the HDD as a temporary write space, then read the data back into memory after you've cleared out the previous data.
The problem with one is clock cycles: you just waste them. The problem with two is that you use double the memory and now have to keep track of two memory addresses, not one. The problem with three is that going to the HDD is far less efficient than just writing to memory.
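If the block-granularity premise above holds (a premise other posters in this thread dispute), the first two workarounds can be sketched roughly in C. The 32-byte `BLOCK` size, the function names, and the payloads here are illustrative assumptions, not anything from an actual console SDK:

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

#define BLOCK 32  /* 256-bit access granularity assumed in the post */

/* Option 1: clear the block, then write the full payload back. */
static void clear_then_write(uint8_t *block, const uint8_t *data, size_t len)
{
    assert(len <= BLOCK);
    memset(block, 0, BLOCK);   /* the wasted cycles: the whole block is cleared */
    memcpy(block, data, len);  /* then rewritten, even for a 4-byte change */
}

/* Option 2: build the new contents in a second block, then retire the old one. */
static void double_buffer_append(uint8_t *old_block, uint8_t *new_block,
                                 const uint8_t *extra,
                                 size_t old_len, size_t extra_len)
{
    assert(old_len + extra_len <= BLOCK);
    memcpy(new_block, old_block, old_len);          /* copy the existing bytes */
    memcpy(new_block + old_len, extra, extra_len);  /* append the new ones */
    memset(old_block, 0, BLOCK);  /* two blocks were live until this point */
}
```

Either way, a 4-byte logical change costs a full block's worth of traffic under this model, which is the inefficiency being claimed.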
Again, with graphical data you're either just reading it or completely transforming it. So, it either just sits there in memory (like a texture) or it completely replaces the previous data that was once there. But for use in an OS, there are far more instances where you're dealing with small files (my examples are extremely small) that simply need to be appended, not rewritten.
It's an inefficiency, but at this point the question is whether it really translates into a fundamental performance flaw. It would be interesting to see the same optimized code running on each console; then you'd see a true benchmark of both systems.
Adinnieken said: The Xbox 360 used GDDR3 memory |
<deleted>
drkohler said:
|
Yes, it does.
The Xbox 360 (not the Xbox One) uses GDDR3 memory. It did at launch and still does.
CGI-Quality said:
Ah...misread you, then. My bad. |
No problem. I know I can end up being combative and argumentative at times, and my reason for posting is often to counter. But now I am trying to work more on agreeing with and extending what people say; I need practice in that area. On the subject at hand, I am sickened by individuals who produce corporate propaganda they are not compensated for, as if it were a form of religious devotion. I did try to google an argument for why the ONE was technically competitive with the PS4, and had a hard time finding anything. Then add the whole looks-like-a-Saturn thing, where at launch the system is $100 more. Really? I get people defending this by saying, "But the ONE is far more feature rich than the PS4!" Is it really? The blasted Kinect camera and an improved ability to order a pizza is worth $100 more for specs that are seen as inferior? AT MOST I can see how someone, by some warped logic, could argue that the ONE is marginally superior, even though I am not sure how, BUT it is STILL $100 more expensive.
On a practical level, I have no room for any sort of camera to interact with. My area is too small here. And dropping $500 isn't up my alley either. I am down to an OUYA actually, for a new console to play, and trying old school stuff.
Adinnieken said:
Typically, you read data from the HDD into a block of memory. The size of the block varies by memory, 256 bits(b) on GDDR5. So, if you have a file that's 4 bytes(B) in size you've wasted 28 B. But let's say you now want to do something with that data. Maybe it's an XML file that you keep user stats stored in. Normally, with DDR3 memory you would be able to append the memory. With GDDR5 you have to either 1) clear that block of memory than write the complete data to it, 2) write to a separate block of memory, then clearing out the original block of memory, or 3) use the HDD as a temporary write space, then read data back into memory after you've cleared out the previous data. |
I'm sorry, I still don't have the slightest clue what you are trying to say. "The size of the block varies by memory" I really have no clue...
When you read data from a disk file, there are basically two methods: a) read the whole file: read(filename, buffer), or b) read a block of it: blockread(filename, buffer[], NumBytes). The OS doesn't give a shit about what type of memory you have, how wide the data bus is, or how the bytes get there: it just reads the bytes you want into the buffer you specify.
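That point can be illustrated with ordinary C stdio: the same call works whether the buffer behind it sits in DDR3, GDDR5, or anything else, because the memory technology is invisible at this level. `read_file` and the file name in the test are made-up names for illustration:

```c
#include <stdio.h>

/* Read up to max_bytes of a file into a caller-supplied buffer.
 * Nothing here depends on what kind of DRAM backs the buffer;
 * the C library and the OS hide that entirely. */
static size_t read_file(const char *filename, unsigned char *buffer,
                        size_t max_bytes)
{
    FILE *f = fopen(filename, "rb");
    if (!f)
        return 0;
    size_t n = fread(buffer, 1, max_bytes, f);  /* bytes land in the buffer */
    fclose(f);
    return n;
}
```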
Here is a really odd example (I doubt any dev ever does something like this) of why GDDR5 is actually much better than DDR3: suppose you have a variable array of 32 bytes, and one core needs the odd bytes (1,3,5,7,...) while another core needs the even bytes (0,2,4,6,...). With GDDR5, both cores can (pseudo-simultaneously) read and write this same array without interfering with each other!
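The access pattern described there can be sketched on the software side with two POSIX threads writing disjoint, interleaved bytes of one shared array. Since the bytes are disjoint there is no data race in the C sense; whether the DRAM services the interleaved accesses efficiently is a memory-controller question this host-side sketch cannot test. All names here are illustrative:

```c
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

enum { N = 32 };                 /* the 32-byte array from the example */
static uint8_t shared_bytes[N];

/* One "core" writes the even bytes (0, 2, 4, ...). */
static void *write_even(void *arg)
{
    (void)arg;
    for (size_t i = 0; i < N; i += 2)
        shared_bytes[i] = 2;
    return NULL;
}

/* The other "core" writes the odd bytes (1, 3, 5, ...). */
static void *write_odd(void *arg)
{
    (void)arg;
    for (size_t i = 1; i < N; i += 2)
        shared_bytes[i] = 1;
    return NULL;
}
```

No locking is needed for correctness because each thread touches its own bytes; the performance of this pattern is where the memory type could matter.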