Jump to content

The Next Gen consoles


Major Britten
 Share

Recommended Posts

Gerbiks a pc gamer, he doesn't have any friends, though there are two strange men that he once nodded at in HMV and they lug their rigs around to his gaff once a fortnight to set up a LAN, sit in silence and play anime torture simulators.

What do you do, scottr? I imagine all your mates come over dressed in yoshi romper suits and you all roll around on your brightly coloured carpet eating fizzwops and jellytails, giggling and throwing multicoloured vomit up over each others sugar-high rictus grins.

Then you all get on the bouncy castle controller and make mario do a fungame.

"I'm saving the princess!" you scream, waving your arms around and smacking everyone in the face.

Tears of laughter :lol:

Link to comment
Share on other sites

http://kotaku.com/59...ve-specs-so-far

This what Kotaku say they know about the current PS4 dev kits.

SPECS

We'll begin with the specs. And before we go any further, know that these are current specs for a PS4 development kit, not the final retail console itself. So while the general gist of the things you see here may be similar to what makes it into the actual commercial hardware, there's every chance some—if not all of it—changes, if only slightly.

That being the case, here's what we know is inside PS4 development kits—model # DVKT-KS000K—as of January 2013. As you'll see, some things have changed since earlier kits became available in March 2012.

System Memory: 8GB

Video Memory: 2.2 GB

CPU: 4x Dual-Core AMD64 "Bulldozer" (so, 8x cores)

GPU: AMD R10xx

Ports: 4x USB 3.0, 2x Ethernet

Drive: Blu-Ray

HDD: 160GB

Audio Output: HDMI & Optical, 2.0, 5.1 & 7.1 channels

If you think the HDD is small, remember, these are the specs for a machine that developers are using to make games on, not the console you'll own and be storing media on. And don't worry about having two ethernet ports; as this is a dev kit, one is there for local sharing/testing purposes.

Interestingly, while some of these specs (such as the 8x core CPU) match with those reported by Digital Foundry only a few days ago, others like the RAM (DF reported 4GB of GDDR5, while we've heard 8GB) differ.

We've learned there's a headphone jack on the front of the console, but it's unclear whether that's just for dev kits or is an intended feature of the final retail console.

CONTROLLER

Full size

Ever since the release of the original PlayStation, Sony has maintained roughly the same basic controller design. This trend may be continuing with the PS4, because we've learned that developers are working with—and dev kits support—both the Sixaxis and DualShock 3 controller. This suggests that, for the most part, the design and capabilities of the PS4's controller will be similar to those on the PS3. The documentation also shows a Move controller, suggesting Sony's Wii-style motion wand will work with the new console.

There is a new controller in development for the PS4, though, known internally as the Orbis Development Tool, and while it keeps many of the same features as the current pads—like the four iconic PlayStation face buttons, two thumbsticks and shoulder triggers—there's one key addition.

British site CVG speculated last week that, because they'd heard the PS4's controller was "trying to emulate the same user interface philosophies as the PS Vita", that meant it would feature a touch screen. Instead, the Orbis' controller features a capacitive touch pad, like you find on the back of a Vita (presumably it's also on the back of the PS4's controller), that can recognise two-point multi-touch. The entire pad can also be "clicked" for an additional input button.

The PS4's controller will again be capable of motion-sensing, like its PS3 predecessors, only now with improved technology like tilt correction. It will also feature vibration, which Sony has thankfully learned is a next-gen feature you need to launch with. It'll also have an RGB LED light in it.

While there have been reports of the PS4 controller featuring "biometric" technology, there was no mention of it in the information we were provided.

There's one other addition to the PS4's pad you won't find on a DualShock 3: a "Share" button. We're not exactly sure what it does. The most likely use would be to allow users to share some aspect of their gaming experience to Twitter or Facebook. Maybe a screenshot? We have no idea. But that Share button might have something to do with...

Now when we were discussing this rumour of the PS4 pad having a screen on it I did mean to say that I thought it having a touch pad like the Vita was more likely, but i'm not sure if I actually said it. I was thinking it, for sure.

There's a whole load of stuff in that article but it doesn't include rumours that the PS4 pad will have a touchpad like the Vita.

Link to comment
Share on other sites

Glad to see simultaneous multiaccount support in there, and the Share button sides like a great concept for battering screenshots around the place or inviting friends to games. Want someone to come play a game of Jute Of Callee: Modicum Warm-up? Hit Share and fire off a message to their Facebooktwitter+PSNsphere account.

Link to comment
Share on other sites

Glad to see simultaneous multiaccount support in there, and the Share button sides like a great concept for battering screenshots around the place or inviting friends to games. Want someone to come play a game of Jute Of Callee: Modicum Warm-up? Hit Share and fire off a message to their Facebooktwitter+PSNsphere account.

Yeah, a little while back there were a few comments from people talking about Facebook integration and the like I thought it has to be an absolute certainty that both consoles will feature social network intergration.

It's absolutely expected nowadays, things have changed so much since 2005/6 when the current crop came out. I cannot see them NOT including it.

Link to comment
Share on other sites

That Kotaku articles says they haven't heard anything about biometrics, but what if that's how profiles are handled? As in rather than an account being linked to a specific piece of hardware, any controller will quickly take some measurements and sign you into the appropriate account when you pick it up.

I don't know if such readings are individual/static enough for such a system to work reliably, mind.

Link to comment
Share on other sites

That Kotaku articles says they haven't heard anything about biometrics, but what if that's how profiles are handled? As in rather than an account being linked to a specific piece of hardware, any controller will quickly take some measurements and sign you into the appropriate account when you pick it up.

I don't know if such readings are individual/static enough for such a system to work reliably, mind.

Nah, that doesn't plausible at all to me. Can't see it happening.

Link to comment
Share on other sites

That Kotaku articles says they haven't heard anything about biometrics, but what if that's how profiles are handled? As in rather than an account being linked to a specific piece of hardware, any controller will quickly take some measurements and sign you into the appropriate account when you pick it up.

Sounds utterly pointless, compared to selecting a profile from a short list and pressing A. This is a home device, and doesn't really need personalised security measures.

Link to comment
Share on other sites

Timothy Lottes of Nvidia/FXAA has put up a blog post about his initial reactions to the leaked specs for both Orbis and Durango:

With all the speculation and rumors heating up again for the next generation consoles, here are my thoughts. Note I'm unfortunately an outsider this round, I don't actually know what is in either platform....

PS4

Working assuming the Eurogamer Article is mostly correct with the exception of maybe exact clocks, amount of memory, and number of enabled cores (all of which could easily change to adapt to yields).

While the last console generation is around 16x behind in performance from the current high-end single chip GPUs, this was a result of much easier process scaling and this was before reaching the power wall. Things might be much different this round, a fast console might be able to keep up much longer as scaling slows down. If Sony decided to bump up the PS4 GPU, that was a great move, and will help the platform live for a long time. If PS4 is around 2 Tflop/s, this is roughly half what a single GPU high-end PC has right now, which is probably a lot better than what most PC users have. If desktop goes to 4K displays this requires 4x the perf over 1080p, so if console maintains a 1080p target, perf/pixel might still remain good for consoles even as PC continues to scale.

The real reason to get excited about a PS4 is what Sony as a company does with the OS and system libraries as a platform, and what this enables 1st party studios to do, when they make PS4-only games. If PS4 has a real-time OS, with a libGCM style low level access to the GPU, then the PS4 1st party games will be years ahead of the PC simply because it opens up what is possible on the GPU. Note this won't happen right away on launch, but once developers tool up for the platform, this will be the case. As a PC guy who knows hardware to the metal, I spend most of my days in frustration knowing damn well what I could do with the hardware, but what I cannot do because Microsoft and IHVs wont provide low-level GPU access in PC APIs. One simple example, drawcalls on PC have easily 10x to 100x the overhead of a console with a libGCM style API.

Assuming a 7970M in the PS4, AMD has already released the hardware ISA docs to the public, so it is relatively easy to know what developers might have access to do on a PS4. Lets start with the basics known from PC. AMD's existing profiling tools support true async timer queries (where the timer results are written to a buffer on the GPU, then async read on the CPU). This enables the consistent profiling game developers require when optimizing code. AMD also provides tools for developers to view the output GPU assembly for compiled shaders, another must for console development. Now lets dive into what isn't provided on PC but what can be found in AMD's GCN ISA docs,

Dual Asynchronous Compute Engines (ACE) :: Specifically "parallel operation with graphics and fast switching between task submissions" and "support of OCL 1.2 device partitioning". Sounds like at a minimum a developer can statically partition the device such that graphics can compute can run in parallel. For a PC, static partition would be horrible because of the different GPU configurations to support, but for a dedicated console, this is all you need. This opens up a much easier way to hide small compute jobs in a sea of GPU filling graphics work like post processing or shading. The way I do this on PC now is to abuse vertex shaders for full screen passes (the first triangle is full screen, and the rest are degenerates, use an uber-shader for the vertex shading looking at gl_VertexID and branching into "compute" work, being careful to space out the jobs by the SIMD width to avoid stalling the first triangle, or loading up one SIMD unit on the machine, ... like I said, complicated). In any case, this Dual ACE system likely makes it practical to port over a large amount of the Killzone SPU jobs to the GPU even if they don't completely fill the GPU (which would be a problem without complex uber-kernels on something like CUDA on the PC).

Dual High Performance DMA Engines :: Developers would get access to do async CPU->GPU or GPU->CPU memory transfers without stalling the graphics pipeline, and specifically ability to control semaphores in the push buffer(s) to insure no stalls and low latency scheduling. This is something the PC APIs get horribly wrong, as all memory copies are implicit without really giving control to the developer. This translates to much better resource streaming on a console.

Support for upto 6 Audio Streams :: HDMI supports audio, so the GPU actually outputs audio, but no PC driver gives you access. The GPU shader is in fact the ideal tool for audio processing, but on the PC you need to deal with the GPU->CPU latency wall (which can be worked around with pinned memory), but to add insult to injury the PC driver simply just copies that data back to the GPU for output adding more latency. In theory on something like a PS4 one could just mix audio on the GPU directly into the buffer being sent out on HDMI.

Global Data Store :: AMD has no way of exposing this in DX, and in OpenGL they only expose this in the ultra-limited form of counters which can only increment or decrement by one. The chip has 64KB of this memory, effectively with the same access as shared memory (atomics and everything) and lower latency than global atomics. This GDS unit can be used for all sorts of things, like workgroup to workgroup communication, global locks, or like doing an append or consume to an array of arrays where each thread can choose a different array, etc. To the metal access to GDS removes the overhead associated with managing huge data sets on the GPU. It is much easier to build GPU based hierarchical occlusion culling and scene management with access to these kind of low level features.

Re-used GPU State :: On a console with low level hardware access (like the PS3) one can pre-build and re-use command buffer chunks. On a modern GPU, one could even write or modify pre-built command buffer chunks from a shader. This removes the cost associated with drawing, pushing up the number of unique objects which can be drawn with different materials.

FP_DENORM Control Bit :: On the console one can turn off both DX's and GL's forced flush-to-denorm mode for 32-bit floating point in graphics. This enables easier ways to optimize shaders because integer limited shaders can use floating point pipes using denormals.

128-bit to 256-bit Resource Descriptors :: With GCN all that is needed to define a buffer's GPU state is to set 4 scalar registers to a resource descriptor, similar with texture (up to 8 scalar registers, plus another 4 for sampler). The scalar ALU on GCN supports block fetch of up to 16 scalars with a single instruction from either memory or from a buffer. It looks to be trivially easy on GCN to do bind-less buffers or textures for shader load/stores. Note this scalar unit has it's own data cache also. Changing textures or surfaces from inside the pixel shader looks to be easily possible. Note shaders still index resources using an instruction immediate, but the descriptor referenced by this immediate can be changed. This could help remove the traditional draw call based material limit.

S_SLEEP, S_SETPRIO, and GDS :: These provide all the tools necessary to do lock and lock-free retry loops on the GPU efficiently. DX11 specifically does not allow locks due to fear that some developer might TDR the system. With low level access, the S_SLEEP enables placing wavefront to sleep without busy spinning on the ALUs, and the S_SETPRIO enables reducing priority when checking for unlock between S_SLEEPs.

S_SENDMSG :: This enables a shader to force a CPU interrupt. In theory this can be used to signal to a real-time OS completion of some GPU operation to start up some CPU based tasks without needed the CPU to poll for completion. The other option would be maybe a interrupt signaled from a push buffer, but this wouldn't be able to signal from some intermediate point during a shader's execution. This on PS4 might enable tighter GPU and CPU task dependencies in a frame (or maybe even in a shader), compared to the latency wall which exists on non-real-time OS like Windows which usually forces CPU and GPU task dependencies to be a few frames apart.

Full Cache Flush Control :: DX has only implicit driver controlled cache flushes, it needs to be conservative, track all dependencies (high overhead), then assume conflict and always flush caches. On a console, the developer can easily skip cache flushes when they are not needed, leading to more parallel jobs and higher performance (overlap execution of things which on DX would be separated by a wait for machine to go idle).

GPU Assembly :: Maybe? I don't know if GCN has some hidden very complex rules for code generation and compiler scheduling. The ISA docs seem trivial to manage (manual insertion of barriers for texture fetch, etc). If Sony opens up GPU assembly, unlike the PS3, developers might easily crank out 30% extra from hand tuning shaders. The alternative is iterating on Cg, which is possible with real-time profiling tools. My experience on PC is micro-optimization of shaders yields some massive wins. For those like myself who love assembly of any arch, a fixed hardware spec is a dream.

...

I could continue here, but I'm not, by now you get the picture, launch titles will likely be DX11 ports, so perhaps not much better than what could be done on PC. However if Sony provides the real-time OS with libGCM v2 for GCN, one or two years out, 1st party devs and Sony's internal teams like the ICE team, will have had long enough to build up tech to really leverage the platform.

I'm excited for what this platform will provide for PS4-only 1st party titles and developers who still have the balls to do a non-portable game this next round.

Xbox720

Working here assuming the Eurogamer Article is close to correct. On this platform I'd be concerned with memory bandwidth. Only DDR3 for system/GPU memory pared with 32MB of "ESRAM" sounds troubling. 32MB of ESRAM is only really enough to do forward shading with MSAA using only 32-bits/pixel color with 2xMSAA at 1080p or 4xMSAA at 720p. Anything else to ESRAM would require tiling and resolves like on the Xbox360 (which would likely be a DMA copy on 720) or attempting to use the slow DDR3 as a render target. I'd bet most titles attempting deferred shading will be stuck at 720p with only poor post process AA (like FXAA). If this GPU is pre-GCN with a serious performance gap to PS4, then this next Xbox will act like a boat anchor, dragging down the min-spec target for cross-platform next-generation games.

My guess is that the real reason for 8GB of memory is because this box is a DVR which actually runs "Windows" (which requires a GB or two or three of "overhead"), but like Windows RT (Windows on ARM) only exposes a non-desktop UI to the user. There are a bunch of reasons they might ditch the real-time console OS, one being that if they don't provide low level access to developers, that it might enable a faster refresh on backwards compatible hardware. In theory the developer just targets the box like it was a special DX11 "PC" with a few extra changes like hints for surfaces which should go in ESRAM, then on the next refresh hardware, all prior games just get better FPS or resolution or AA. Of course if they do that, then it is just another PC, just lower performance, with all the latency baggage, and lack of low level magic which makes 1st party games stand out and sell the platform.

http://timothylottes.blogspot.co.uk/2013/01/orbis-and-durango.html

Link to comment
Share on other sites

PSOrbis exclusives will be potentially awesome, just like how Sony devs made the PS3 work to levels which licensees couldn't. He doesn't seem to be quite as hot on Durango, but he seems to have misread the specs for that a bit, but his theory about the NextBox copying the PC/Apple upgrade route is certainly an interesting idea.

Edit - reading some of the technical stuff he brings up makes it sound like coding to the metal really does let you get amazing performance gains, multiplatform devs won't/can't go down that route, but exclusives will.

Link to comment
Share on other sites

A fast refresh cycle is something I could see MS actually doing. Would make early performance limits less of an issue (whereas PS4 matures in software, the Xbox would simply get faster) and open up a broad range of price points. Plenty of people would buy a £100 Xbox with no antialiasing and a 1024x720 frame buffer if it played the same games and simply not care.

Link to comment
Share on other sites

VGleaks posted details of the evolution of the Orbis dev kits:

Currently, there are 3 types of devkits:

1) R10 boards with special BIOS, running in generic PC’s

2) “Initial 1″ — Early devkit

* model number: DVKT-KS000K

* SCE-provided PC equipped with R10XX board

* Runs Orbis OS

* Available July 2012

3) SoC Based Devkit: early version of the ORBIS hardware

* Available January 2013

Time to look inside of each devkit:

R10 Board (with special BIOS) assemble in a Generic PC

* Requires Windows 7 64 bit edition

* Recommend

* Sandy Bridge (Intel) or Bulldozer (AMD)

* Minimum 8 GB RAM (system memory)

* 650 Watt PSU

* VS2010 SP1

* DWM (Desktop Windows Manager) must be turned off

* Application will use Windows services for everything except GPU interface

* SCE will provide “Gnm”, a custom GPU interface

Do you remember the first Durango’s pictures? This is a very early devkit based on Windows.

DVKT-KS000K (“Initial 1″)

* Runs Orbis OS

* CPU: Bulldozer 8-core, 1.6 Ghz

* Graphics Card: R10 with special BIOS

* RAM: 8 GB (system memory)

* BD Drive

* HDD: 2.5 ” 160 GB

* Network Controller

* Custom South Bridge allows access to controller prototypes

2RsOvPJ.jpg

SoC Based Devkit

* Available January 2013

* CPU: 8-core Jaguar

* GPU: Liverpool GPU

* RAM: unified 8 GB for devkit (4 GB for the retail console)

* Subsystem: HDD, Network Controller, BD Drive, Bluetooth Controller, WLAN and HDMI (up to 1980×1080@3D)

* Analog Outputs: Audio, Composite Video

* Connection to Host: USB 3.0 (targeting over 200 MB/s),

* ORBIS Dualshock

* Dual Camera

The last devkit is the closer one to the retail console. Expect a machine with these specs or similar to these ones. Obviously, Sony could introduce changes in this features, but don’t expect deep mods.

http://www.vgleaks.c...s-roadmaptypes/

Link to comment
Share on other sites

Actually, this is all very exciting if true, because it means that the PS4 and Xbox 8 or Infinity (I reckon it'll get called that, honest) will be very different machines. The former really singing with high-end exclusives and the latter literally taking the PC into the living room.

Link to comment
Share on other sites

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now
 Share

  • Recently Browsing   0 members

    • No registered users viewing this page.
×
×
  • Create New...

Important Information

We have placed cookies on your device to help make this website better. You can adjust your cookie settings, otherwise we'll assume you're okay to continue. Use of this website is subject to our Privacy Policy, Terms of Use, and Guidelines.