
What DirectX is, and why you will not see it in a non-Microsoft machine.

 

...


Except that, having hit the practical limit of clock rates, about the only way to increase processing power for general use is to increase the number of cores/threads. Since it is remarkably impractical to expect programmers to explicitly write programs distributed across an arbitrary number of threads, and C++ is an awful language for a compiler to implicitly distribute across an arbitrary number of threads, C++ is a relic that is not well suited to modern programming architectures; and as time goes on this is only going to become clearer to everyone.
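To give a concrete sense of what "being explicit" about threads means in C++, here is a minimal sketch using only the standard library (the summing workload and the parallelSum name are invented for illustration): the programmer has to choose the partitioning, launch the workers and join them by hand, and nothing adapts automatically beyond the thread count you pass in.

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Sum a large array by hand-partitioning it across N worker threads.
// The split, the launch and the join are all the programmer's job.
long long parallelSum(const std::vector<int>& data, unsigned numThreads) {
    std::vector<long long> partial(numThreads, 0);
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / numThreads;

    for (unsigned t = 0; t < numThreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == numThreads) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            // Each worker accumulates its own slice into its own slot.
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```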

This doesn't mean C++ will disappear overnight; new C++ programs will probably be started for years, and there will be tons of C++ work on legacy systems for decades, but I see C++ being phased out much the way C was before it.

Core IPC and clock speed can be increased on phones/tablets for at least the next five years; they are nowhere near the limit. For example, ARM was on the A8 two years ago, the A9 now, and the A15 next year, with single-core performance increasing hugely each time, and that will continue. We're at about 1GHz and the limit is ~5GHz.

Intel IS near the limit, but even then desktop thread count will have been constant from Nehalem (2009) through Ivy Bridge (2013), and more CPU power is not needed to make better games on those platforms.



fillet said:
..

You sound like a student who found something out and thought nobody else knew it, when in reality everybody knew and just didn't care. DirectX and equivalent technologies are a necessity in today's more complex games, versus 15-20 years ago, when the lack of unification wasn't an obstruction to efficient development.

Your comments sound almost bitter and anti-Microsoft more than anything else. OpenGL nowadays is the Linux of operating systems vs. DirectX being the Windows of operating systems. Much as we all love Linux, for an all-round effective setup that can do everything without spending a whole lifetime working it out, Linux is next to useless and over time becoming even more irrelevant; just like OpenGL, it just hasn't kept up. Yes, I know OpenGL is only for graphics, but you get my point.

Linux is only good for XBMC; everybody who isn't a free-software bigot knows that. DirectX > OpenGL; everyone who isn't an anti-Microsoft bigot knows that too.

Your reasoning makes sense only on the most superficial level; when you dig deeper into your work and research, you'll be embarrassed by the post you have made.

 

No offense / :)

I don't know where you get that I'm anti-Microsoft from that. (I'm not; they had me try Linux and Unix in school and I didn't really like them, even though I can see where some do.) I made this post after reading some posts on this site from users wanting DirectX in the next consoles, which isn't possible without also using Windows as the OS. I also only touched on this superficially because not everyone is a programmer, so getting more technical would be a waste of time.

As to what other users have said: yes, I know DirectX isn't a programming language (hence the quotation marks). It is a set of libraries that encapsulate the setup and processing of graphics, sound and I/O. DirectX can be compared to C++'s STL (Standard Template Library) in that both are set up to make things easier on the programmer. But there are times when you need to specialize or overload their templates to get the desired results, and times when you would rather replace them entirely.
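For what that kind of customization looks like in practice, here is a minimal sketch (the Asset type and the sort criterion are made up for illustration): the STL supplies the generic algorithm, and the programmer supplies the piece the defaults don't cover.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// A hypothetical game asset record, used only for illustration.
struct Asset {
    std::string name;
    std::size_t sizeBytes;
};

int main() {
    std::vector<Asset> assets = {
        {"hero.dds", 4096}, {"level1.wav", 1024}, {"ui.png", 512}};

    // std::sort works out of the box for built-in comparisons, but here we
    // supply our own comparator (a custom policy) to sort assets by size.
    std::sort(assets.begin(), assets.end(),
              [](const Asset& a, const Asset& b) {
                  return a.sizeBytes < b.sizeBytes;
              });
    return 0;
}
```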



Dat dare new XBOX running DirectX 6.0 is teh shiz top o da line grfx pwn wiiu and ps4 with its rage 3d grfx



fillet said:
BlkPaladin said:

When I read comments from people who want DirectX in this or that console, it makes me want to tear out the little hair I have left. It is apparent that these people have no idea what DirectX is or what it does.

First off, before I delve into the more semi-technical aspects of what it does, I will go into what it is. DirectX is a wrapper "language" that helps developers make graphics-intensive programs for the WINDOWS platform (i.e., anything that runs Windows in some form). So right there you can rule out DirectX ever being used for any Sony or Nintendo product; those companies make their own proprietary OS for their systems.

Now, what is a wrapper and what does it do for programmers? From a programming perspective, you can view a computer system that runs Windows as a building with many storeys. It runs from the machine level (custom-programming each and every piece of hardware in the computer to do what you want) up to the various API layers of Windows.

Windows is a system composed of many layers that separate programs from the hardware. This helps in that you can use the generic virtual drivers that Windows provides, so you do not have to worry about optimizing your code for a specific device. The downside is that sometimes the layers have trouble communicating with each other and trigger the Blue Screen of Death.

What DirectX does is add another layer on top of this; it handles the communication with these generic drivers and messaging systems so the programmer can concentrate on making their program. This gives developers an easy entrance into programming for Windows; the downside is that you sacrifice some power and optimization (and, to some extent, stability) for ease.

So DirectX on a console is basically a stepping stone that lets programmers quickly write code for the Xbox. But as time goes by, the DirectX portions of the engines get replaced in favor of lower-level code that lets the engine run faster, take up less RAM, and do things that just are not possible through DirectX. They may still use DirectX for items such as I/O functions, since these tend to be closer to the machine level and are already optimized fairly well. (No need to reinvent the wheel.)

Nintendo and Sony do use wrapper libraries for their systems; they come in the form of the open OpenGL/OpenAL APIs, since these do not need to talk to the Windows API layers, which allows easier porting to various other platforms. (Though the OpenGL/OpenAL implementations on these consoles are optimized and trimmed down for the systems they are intended for.)
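As a rough sketch of why coding against OpenGL eases porting (the renderFrame function is hypothetical, and the genuinely platform-specific window/context creation is omitted): the drawing calls themselves are the same on every platform that ships an OpenGL implementation.

```cpp
#include <GL/gl.h>  // header path varies by platform (e.g. OpenGL/gl.h on macOS)

// Hypothetical per-frame routine: these GL calls compile unchanged on any
// platform that provides an OpenGL implementation.
void renderFrame() {
    glClearColor(0.1f, 0.1f, 0.2f, 1.0f);   // set the background colour
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBegin(GL_TRIANGLES);                   // legacy immediate mode, kept short for the sketch
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}
```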

Well, that is all I can say for now. I was working on this for a book I'm writing to try and combat my major student loan issues. So I'm gathering what I learned about game development and presenting it for those who want to pursue it as a hobby and do it as inexpensively as possible. (Unlike me, who got taken by a "private technical college" that took all my financing, and now the creditors want their money...)

You sound like a student who found something out and thought nobody else knew it, when in reality everybody knew and just didn't care. DirectX and equivalent technologies are a necessity in today's more complex games, versus 15-20 years ago, when the lack of unification wasn't an obstruction to efficient development.

Your comments sound almost bitter and anti-Microsoft more than anything else. OpenGL nowadays is the Linux of operating systems vs. DirectX being the Windows of operating systems. Much as we all love Linux, for an all-round effective setup that can do everything without spending a whole lifetime working it out, Linux is next to useless and over time becoming even more irrelevant; just like OpenGL, it just hasn't kept up. Yes, I know OpenGL is only for graphics, but you get my point.

Linux is only good for XBMC; everybody who isn't a free-software bigot knows that. DirectX > OpenGL; everyone who isn't an anti-Microsoft bigot knows that too.

Your reasoning makes sense only on the most superficial level; when you dig deeper into your work and research, you'll be embarrassed by the post you have made.

 

No offense / :)


For someone who is on a soapbox about someone else's "ignorance" and "bias", you certainly demonstrate a significant amount of ignorance and bias ...

OpenGL is the standard for creating 3D graphics on the Wii, PS3, Nintendo DS, Nintendo 3DS, PSP, PS Vita, Macs, Linux, iOS devices and Android devices, and with Google pushing WebGL (OpenGL for HTML5) its share of the market is growing. DirectX is hugely popular as well, but it is not as ubiquitous as OpenGL.

As for the value of Linux: Linux accounts for more than 60% of all professionally managed servers, and the vast majority of mobile platforms are based on a *nix system, Android (obviously) being based on Linux and iOS being based on BSD. Windows' market share is huge in the desktop environment, but most households own several *nix-based systems (usually without knowing it) for every Windows PC they own, and most businesses are maintaining more *nix devices than they realize (often without knowing it).



Soleron said:


Core IPC and clock speed can be increased on phones/tablets for at least the next five years; they are nowhere near the limit. For example, ARM was on the A8 two years ago, the A9 now, and the A15 next year, with single-core performance increasing hugely each time, and that will continue. We're at about 1GHz and the limit is ~5GHz.

Clock rate is the frequency at which the transistors etc. operate.

We haven't hit a "practical limit" thus far; new technologies and techniques are always being discovered that improve how quickly a transistor switches and the frequencies it can operate at. For example, I remember reading a few years ago about a 100GHz transistor.

Take the Intel Atom: it is built with low-power transistors, which don't scale in frequency too well but do save on power consumption.
The Core i7 series, however, uses transistors that scale in frequency far more aggressively, though they do use that little bit more power to pull it off.

Also, extreme overclockers managed to break the 8GHz barrier on the new AMD FX chips, so that 5GHz wall was effectively smashed; a few more die shrinks, and maybe 3D transistors, may even improve that situation for stock clocks. (GlobalFoundries is also working on 3D transistor tech.)

There is a lot to CPU design, more than most people realise; you can watch how a CPU is made here (dumbed down, of course, and not showing any architectural stuff): http://www.youtube.com/watch?v=qLGAoGhoOhU

For phones and consoles... you could have them running at 5 or 10GHz, but they are designed to be cheap, small and energy efficient.
They will never match a proper desktop processor designed to end up in systems costing several thousand dollars, built with a fabrication process advantage of several years; it simply cannot be done. The Cell was no exception: although it could perform some tasks incredibly quickly, it had its inefficiencies.

Now, in regards to DirectX... yes, you won't see it on a non-Microsoft console.
However, AMD, NVIDIA, Microsoft and other companies "get together" to design the new features and standards that end up in DirectX; this has a flow-on effect on the hardware, as AMD and NVIDIA release their products to comply with the standard, as they have done for almost two decades.
The PS3's GPU might not use DirectX... but rest assured that its feature set is fully DirectX 9 compliant, thanks to the console using an old GeForce 7-class PC graphics part, which is fully accessible to developers.

The next generation, aka the PS4, again won't use DirectX; however, its graphics hardware should end up being fully DirectX 11 compatible. That means you finally get to experience tessellation and advanced shadows and shaders, even without the API itself, thanks to the work Microsoft has done collaborating with hardware manufacturers in the PC space.
Example of tessellation, in case anyone doesn't know: [image not shown]

Worth noting: the Xbox 360 has a tessellator (the PS3 doesn't); however, it's fairly underpowered and only used sparingly, e.g. for water.
Next gen should be interesting; we should finally get proper geometry on almost everything instead of the bump mapping used now, so bricks and rocks should have real, actual depth and not just be flat surfaces. Can't wait!



--::{PC Gaming Master Race}::--

Pemalite said:
Soleron said:


Core IPC and clock speed can be increased on phones/tablets for at least the next five years; they are nowhere near the limit. For example, ARM was on the A8 two years ago, the A9 now, and the A15 next year, with single-core performance increasing hugely each time, and that will continue. We're at about 1GHz and the limit is ~5GHz.

Clock rate is the frequency at which the transistors etc. operate.

We haven't hit a "practical limit" thus far; new technologies and techniques are always being discovered that improve how quickly a transistor switches and the frequencies it can operate at. For example, I remember reading a few years ago about a 100GHz transistor.

Not on CMOS they aren't. Silicon CMOS will be used for the next 5-10 years regardless of what you read about; it's too expensive to make other technologies work at present. I can say this because Intel's roadmap goes to 11nm in ~5 years' time.

Take the Intel Atom: it is built with low-power transistors, which don't scale in frequency too well but do save on power consumption.
The Core i7 series, however, uses transistors that scale in frequency far more aggressively, though they do use that little bit more power to pull it off.

There have been no desktop CPUs with reasonable thermals (i.e. <=130W TDP) that have gone above 4.5GHz yet. Clock speed has been stagnant for years, ever since the 3.8GHz Pentium 4 seven years ago. That is why I'm saying ~5GHz is the limit. It's a heat/power issue, not a tech one.

Also, extreme overclockers managed to break the 8GHz barrier on the new AMD FX chips, so that 5GHz wall was effectively smashed; a few more die shrinks, and maybe 3D transistors, may even improve that situation for stock clocks. (GlobalFoundries is also working on 3D transistor tech.)

Lolno. 3D transistors won't improve Ivy Bridge's clocks much, just like every tech advance since the 90nm Pentium 4. Look at the leaked roadmaps.

There is a lot to CPU design, more than most people realise; you can watch how a CPU is made here (dumbed down, of course, and not showing any architectural stuff): http://www.youtube.com/watch?v=qLGAoGhoOhU

I've been following this since 2006.

For phones and consoles... you could have them running at 5 or 10GHz, but they are designed to be cheap, small and energy efficient.
They will never match a proper desktop processor designed to end up in systems costing several thousand dollars, built with a fabrication process advantage of several years; it simply cannot be done. The Cell was no exception: although it could perform some tasks incredibly quickly, it had its inefficiencies.

That is what I am saying. They are at 1GHz now; I think we can get them up to 3-4GHz before they hit a thermal wall. Mobile chips will come up to current desktop performance in a few years, and ~2008 desktop levels are all that is ever needed for games, as you become limited by game budget before CPU.





I don't really know where to start, so I'll just pick off random topics that came up in this thread:

1.
DirectX vs OpenGL: these are just libraries (with different standards/API calls) that aid programmers in creating their graphics. Originally they were just simple coordinate systems used for creating simple shapes, but over the years the libraries were extended to support textures, anti-aliasing and even animation. If you are creating a new system architecture with a newly developed graphics card (like a PS4 or Xbox 720), there won't be DirectX or OpenGL support for your system. The existing library can probably still do simple things like Draw or CreateTexture if the instruction sets are still there, but most of the other functionality, like texture copies, will probably be unoptimized or unusable. This is where the architecture/GPU programmers come in: they have to modify the DirectX or OpenGL libraries (they avoid writing their own coordinate standards so people don't need to learn a new coordinate system) to use the proper instruction sets they created, and then provide this modified DirectX/OpenGL in their developer kit.

So which one is better? Given unbiased hardware, neither; they can both achieve the same result. But NVIDIA has added DirectX-optimized instructions to its more recent commercial graphics cards to give DirectX an edge for standard PC game development. However, if Sony asked NVIDIA to create a graphics card for their new PS4, no doubt they would ask NVIDIA to add OpenGL-optimized instructions instead.
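A minimal sketch of how engines usually make the "neither is better" point moot in practice (all class and function names here are invented): game code talks to the engine's own renderer interface, and a DirectX-backed or OpenGL-backed implementation is slotted in per platform.

```cpp
#include <memory>

// Hypothetical engine-side abstraction: game code calls this interface and
// never touches the graphics API directly.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void clear(float r, float g, float b) = 0;
    virtual void drawTriangles(const float* vertices, int count) = 0;
};

class GLRenderer : public Renderer {   // would call glClear/glDrawArrays etc.
public:
    void clear(float, float, float) override {}
    void drawTriangles(const float*, int) override {}
};

class D3DRenderer : public Renderer {  // would call the Direct3D equivalents
public:
    void clear(float, float, float) override {}
    void drawTriangles(const float*, int) override {}
};

// The platform build decides which backend the rest of the game sees.
std::unique_ptr<Renderer> makeRenderer(bool useDirect3D) {
    if (useDirect3D)
        return std::make_unique<D3DRenderer>();
    return std::make_unique<GLRenderer>();
}
```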

2.
Java vs C++
The most obvious pro of using Java is the ability to port the application anywhere a JVM runs, while the most obvious con is not having control over the JVM that runs your application. Memory management, including garbage collection, is done by the JVM. You have no way of knowing whether it is aligning memory for optimal usage and access. Most JVM applications use more memory than needed, and developers are forced to work around the JVM.

C++ gives you the control over, and exposure to, the memory address space that you need to optimize your code, but the downside is that it's more difficult to port your code to other systems. Developers are lucky that the PS3 and Xbox 360 both use a PowerPC-based core, which made porting easier. Some game developers even built their own engines so the same code could be recompiled for the Xbox or PS3 architecture.

Systems programmed in C++ let you push the hardware to its maximum potential, while those using Java will always be limited by the JVM rather than the hardware. Another trade-off worth considering for a game studio is that it is far easier to find a Java developer than a C++ one.
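A small sketch of the kind of explicit memory control being described, in standard C++ (the Particle struct and the counts are made up for illustration); the point is that layout, alignment and allocation timing are decided by the programmer, with no garbage collector involved.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical particle record, aligned to a cache line so hot loops avoid
// false sharing. A JVM language offers no direct control like this.
// (C++17's aligned operator new makes the vector allocation respect this.)
struct alignas(64) Particle {
    float x, y, z;
    float vx, vy, vz;
};

int main() {
    // Reserve exactly the memory we intend to use, up front, so no hidden
    // reallocation or collection pause happens mid-frame.
    std::vector<Particle> particles;
    particles.reserve(10000);
    particles.resize(10000);

    std::printf("sizeof(Particle)=%zu, alignof=%zu\n",
                sizeof(Particle), alignof(Particle));
    return 0;
}
```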

3.
Mobile, Ghz, IPC, Performance.
This is where my facepalm comes in. When dealing with mobile hardware, no one cares about metrics like GHz or IPC in isolation. They could easily make a mobile device with the same performance as a Core i7; it is performance per watt that has them stumped. More performance = more transistors = more energy consumption = the mobile device becoming more useless. AND NO, adding more cores does not make things go faster and use less power. You only get better performance from more cores when you can make your code more parallel, and even that has a limit (read up on Amdahl's law). If they want mobile to get faster, they have to do it in such a way that the device can still run for an acceptable amount of time.
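For reference, Amdahl's law caps the speedup from extra cores by the fraction of the work that can actually run in parallel; here is a tiny sketch of the arithmetic (the 80% parallel fraction is just an assumed example).

```cpp
#include <cstdio>

// Amdahl's law: with a fraction p of the work parallelizable and n cores,
// the best possible speedup is 1 / ((1 - p) + p / n).
double amdahlSpeedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.8;  // assume 80% of the frame is parallelizable
    for (int cores : {1, 2, 4, 8, 16, 1000}) {
        std::printf("%4d cores -> %.2fx speedup\n", cores, amdahlSpeedup(p, cores));
    }
    // Even with 1000 cores the speedup is capped near 1 / (1 - p) = 5x.
    return 0;
}
```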

I hope I cleared some misconceptions up; if you disagree or I have some facts wrong, please let me know!



Soleron said:

Not on CMOS they aren't. Silicon CMOS will be used for the next 5-10 years regardless of what you read about; it's too expensive to make other technologies work at present. I can say this because Intel's roadmap goes to 11nm in ~5 years' time.

That's why I said: "new technologies and techniques are always being discovered that improve how quickly a transistor switches and the frequencies it can operate at."
You cannot expect technology to stand still.

Soleron said:
There have been no desktop CPUs with reasonable thermals (i.e. <=130W TDP) that have gone above 4.5GHz yet. Clock speed has been stagnant for years, ever since the 3.8GHz Pentium 4 seven years ago. That is why I'm saying ~5GHz is the limit. It's a heat/power issue, not a tech one.

 

The Pentium 4 is probably a bad example of this.

However, you also have to remember that the Pentium 4 started off at 1.3GHz and scaled up to 3.8GHz on the same architecture (NetBurst).
That's more than a doubling in clock speed.

Take the first Pentium 4 revision, the Willamette on the 180nm fabrication process: it went from 1.3GHz to 2GHz.
Then the second revision was released on the 130nm fabrication process, aka the Northwood, which scaled from 1.6GHz to 3.4GHz while staying within the same TDP.
Then the Prescott on the 90nm fabrication process went from 2.4GHz to 3.8GHz.
The Tualatin didn't scale very well in clock speed despite being built on the same fabrication process as the Northwood (130nm).

The thing to note is that the Prescott had its pipeline lengthened and additional cache added, and the architecture simply didn't scale as well as Intel had hoped.

However... after the Pentium 4, Intel and AMD began to use the TDP and transistor budget to increase core counts rather than clock speeds, as seen with the Pentium D and the Athlon X2 series.

They also took a different design approach: instead of building a processor and then ramping up the speeds, they now work from the opposite end of the scale; they target a frequency and then work downwards. If yields are great, they have room to move back up.

You also have to keep in mind that adding more cores consumes more power, which can keep clock speeds low.
I can disable 7 cores on my AMD FX processor and break past the 5GHz barrier on air, but with all 8 cores enabled I can only hit 4.8GHz.

So that is part of the reason why clock speeds haven't been increasing as rapidly as they did in the past.

Soleron said:


Lolno. 3D transistors won't improve Ivy Bridge's clocks much, just like every tech advance since the 90nm Pentium 4. Look at the leaked roadmaps.

Yes it will; it all comes down to leakage.
The smaller the fabrication process, the harder it is to prevent leakage.
The vast majority of leakage becomes heat; heat drives up the TDP of a processor, which can limit the number of cores and/or the clock speeds.
Intel's roadmaps might not show a clock speed improvement, but I bet the overclockers will have a lot of fun with it, just like how the AMD FX series has no clock speed improvement over the Phenom IIs but overclocks like no chip before it.
Remember, with less leakage you can go two ways: higher clocks or lower voltages.


Soleron said:


I've been following this since 2006


Hardware, or processor development in general? Six years isn't that long either way.



--::{PC Gaming Master Race}::--

Pemalite said:

Clock rate is the frequency at which the transistors etc. operate.

We haven't hit a "practical limit" thus far; new technologies and techniques are always being discovered that improve how quickly a transistor switches and the frequencies it can operate at. For example, I remember reading a few years ago about a 100GHz transistor.

Take the Intel Atom: it is built with low-power transistors, which don't scale in frequency too well but do save on power consumption.
The Core i7 series, however, uses transistors that scale in frequency far more aggressively, though they do use that little bit more power to pull it off.

Also, extreme overclockers managed to break the 8GHz barrier on the new AMD FX chips, so that 5GHz wall was effectively smashed; a few more die shrinks, and maybe 3D transistors, may even improve that situation for stock clocks. (GlobalFoundries is also working on 3D transistor tech.)

There is a lot to CPU design, more than most people realise; you can watch how a CPU is made here (dumbed down, of course, and not showing any architectural stuff): http://www.youtube.com/watch?v=qLGAoGhoOhU

What I was actually talking about, when I mentioned that we have hit a practical limit on clock speeds, is that all major chip manufacturers have hit barriers to making commercial processors that run much faster than 4GHz. Consider that most architectures saw their clock speed double every 18 to 24 months for decades, while for the past 7 to 8 years desktop CPUs have been stuck in the 3GHz range. The reason most often cited for this stall is that running these processors at higher speeds makes them far too hot and too unstable for commercial release.

While I have no doubt that single-threaded performance will continue to increase, the bulk of the performance gains over the past decade have come from improvements in multi-threaded performance, and I suspect this will continue for the next decade.



HappySqurriel said:
Pemalite said:

Clock rate is the frequency at which the transistors etc. operate.

We haven't hit a "practical limit" thus far; new technologies and techniques are always being discovered that improve how quickly a transistor switches and the frequencies it can operate at. For example, I remember reading a few years ago about a 100GHz transistor.

Take the Intel Atom: it is built with low-power transistors, which don't scale in frequency too well but do save on power consumption.
The Core i7 series, however, uses transistors that scale in frequency far more aggressively, though they do use that little bit more power to pull it off.

Also, extreme overclockers managed to break the 8GHz barrier on the new AMD FX chips, so that 5GHz wall was effectively smashed; a few more die shrinks, and maybe 3D transistors, may even improve that situation for stock clocks. (GlobalFoundries is also working on 3D transistor tech.)

There is a lot to CPU design, more than most people realise; you can watch how a CPU is made here (dumbed down, of course, and not showing any architectural stuff): http://www.youtube.com/watch?v=qLGAoGhoOhU

What I was actually talking about, when I mentioned that we have hit a practical limit on clock speeds, is that all major chip manufacturers have hit barriers to making commercial processors that run much faster than 4GHz. Consider that most architectures saw their clock speed double every 18 to 24 months for decades, while for the past 7 to 8 years desktop CPUs have been stuck in the 3GHz range. The reason most often cited for this stall is that running these processors at higher speeds makes them far too hot and too unstable for commercial release.

While I have no doubt that single-threaded performance will continue to increase, the bulk of the performance gains over the past decade have come from improvements in multi-threaded performance, and I suspect this will continue for the next decade.


No, they stopped scaling through clock speed simply because it provides no added benefit. To make use of a 4+GHz clock you need a deeper pipeline, otherwise many of the cycles just sit there doing nothing; and adding pipeline stages is pretty much capped out, because you eventually stall anyway while waiting for things like memory writes to complete.

Heat and power consumption create a different issue, where the clock timing gets messed up and creates errors, but that can be remedied with additional cooling, which again adds to the power consumption (yeah, you need to use energy to remove energy).

Simply put, they stopped chasing higher clock speeds because of poor efficiency, not because of a hard technical wall.
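To make the pipeline trade-off above concrete, here is a back-of-the-envelope sketch (the delay figures are invented illustrative values, not measurements): splitting a fixed amount of logic into more stages shortens the cycle time and raises the frequency, but each stage adds latch overhead, and every extra stage also lengthens the penalty paid whenever the pipeline stalls or a branch is mispredicted.

```cpp
#include <cstdio>

int main() {
    const double logicDelayNs    = 5.0;   // assumed total logic delay per instruction (illustrative)
    const double latchOverheadNs = 0.05;  // assumed per-stage latch/register overhead (illustrative)

    for (int stages : {5, 10, 20, 31}) {
        // Cycle time = logic split across stages + fixed per-stage overhead.
        double cycleNs = logicDelayNs / stages + latchOverheadNs;
        double freqGHz = 1.0 / cycleNs;
        // A mispredicted branch or a stall flushes roughly the whole pipeline,
        // so the penalty in cycles grows with the number of stages.
        std::printf("%2d stages: ~%.2f GHz, mispredict penalty ~%d cycles\n",
                    stages, freqGHz, stages);
    }
    return 0;
}
```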