
Cel-Shaded Graphics: Hardware and Art

It should be fairly uncontroversial to say that software and hardware have something to do with one another. The idea is strange, however, given just how opposite the natures of software and hardware are. Computer hardware is the physical, tangible component of computing: put a computer under an electron microscope and you could see the doped silicon lattices in its semiconductors. Computer software, in its ideal form, is conceptual: it is not necessarily physical. Software carries the ideas and ‘thoughts’ contained in its programming, and these cannot be manifested physically. If we tried to look at software in its physical form–say, on a hard drive or an EEPROM–all we would be able to see or sense is the pattern in which a program’s atoms and electromagnetic charges have been arranged.

The Hogfather: Your Ideal Swine

The character Death in Terry Pratchett’s Hogfather puts the dilemma rather well. Death is, of course, talking about morals, but the argument applies equally to concepts or ideas:

…then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

Admittedly, it’s pretty well understood that software is written for hardware–computer instructions are specifically written for physical machines to interpret and process. And the line between software and hardware is, in fact, somewhat blurry; take, for instance, firmware. Firmware exhibits features of both software and hardware by being (a) machine-interpretable instructions, but (b) static and physically manifested. But if software were merely hardware, how have we managed to construct complex and flashy edifices like Modern Warfare 3 on top of incomprehensible patterns of transistors? How could we copy and mimic the world so convincingly if we were not, at any point, permitted to employ ideas, thoughts, morals &c? In philosophical terms, we have a real dilemma here: are pieces of software just physical parts of the universe, like hardware, or are they something more? Something… metaphysical?

The direction in which I’m now heading obviously seems perilously similar to the mind-body dualism dilemma that has been so frequently and irritatingly rehashed in philosophical writing. It should therefore come as a relief when I say that this is not what I’m going to discuss; instead, I would like to look at what the alleged divide between software and hardware means for video games as an art-form.

The supposed software/hardware dualism mentioned above has everything to do with video games. Video games feature both hardware and software in their constitution, making them an incredibly complex beast to understand aesthetically. Why? Take video game graphics/visuals as an example. Kotaku recently ran two articles about the graphical prowess of the next generation of PlayStation and Xbox consoles (Graphics Don’t Make Good Characters and Remember They Have More to Offer Than Just Graphics). Both argue that something other than console processing power (or the complex and fantastic visuals it enables, which is probably more on the mark) is needed to make a good video game. This other element–character development, story-telling, plot structure and substance–is not as dependent on hardware architecture as game visuals are; it is idealistic. Semantically speaking, ‘idealistic’ here doesn’t carry the pejorative sense of being impractical, but means something that is not physically tangible. A character or a story is an idea, a kind of ghost that inhabits the physical shell of an actor or a 3D hardware-rendered model.

Thus far one could say that we have identified another famous dialectic–albeit one that is potentially illusory, as physicalists (proponents of the ‘software-is-hardware’ philosophy, as I’ll call it) are wont to interject: hardware-determined visuals versus almost entirely software-inhabiting story and character development. This kind of opposition seems to have existed since time immemorial–one can easily think of the way paperback novels transformed the way books were read. The same must go for video games in the league of Battlefield 3. The kind of artistic malaise in games that critics like Yahtzee on The Escapist identify is evidently not something new, and will probably never cease.

Indeed, this fantastic article on the Gameological Society’s website assesses the likely future of the mainstream video game industry within a framework similar to the one being pursued here. Written in light of the mysterious launch of Sony’s PlayStation 4, the critical part of the article comes right at the end:

Creativity thrives under limitations. People who love games understand this implicitly, since the best players find the most creative ways to succeed within the confines of the rules. The Great Train Robbery is a masterpiece not in spite of its limitations but because of them. So if David Cage doesn’t think he can produce an emotional work of art with a PlayStation 3 and an eight-figure budget, maybe he shouldn’t be in the art-making business.

Expanding the technological capabilities of our game machines is not inherently bad, but treating new tech as a magic bullet is a self-destructive delusion (if a familiar one). The reason that so many games suck is not because the technology is too modest. The reason that so many games suck is because so many games suck. Making art is hard. No microchip changes that.

The really important thing about this quote (and the whole article) is not that it reinforces the idea developed above that artistic ideas are independent of their physical medium, but that it brings up the idea that artistic ideas can be constrained by their physical medium. It asserts that the medium through which art forms its expression sets ‘rules’ within which artists need to find creative ways to work. Here we have a slightly more nuanced interpretation of the software-idealism position, one that mixes in a little physicalism: hardware determines the boundaries of software’s concepts/ideas/thoughts/substance. One could almost call this ‘soft software determinism’, because a more radical version can immediately be imagined: hardware outright determines the substance of software. Against software determinism you obviously have software indeterminism (or free will &c), which asserts that hardware and software aren’t causally related.

Enter Jet Set Radio

It’s important to note that the software determinism position is related to the software physicalism question because the software-as-hardware (physicalist) position directly bears on how software and hardware are causally related. If software is just hardware, its realistic-seeming appearance is just a clever arrangement of atoms and magnetic charges. This will become important later on.

In any case, consider Jet Set Radio on the Sega Dreamcast. Its artistic direction was so influential that it inspired a virtual (in the potential, not the epistemological, sense) artistic movement in video games. JSR legitimised the use of cel-shaded graphics at a time when realism was the dominant driving force in the artistic value system of the video game industry. Realism obviously remains an important way of making games today, but if it weren’t for games like JSR (or Wind Waker–I wonder if Nintendo saw JSR and decided to copy its visual style?), we’d be left with an overwhelming corpus of mainstream games trying to copy the visual appearance and mechanics of the real world.

Humouring our determinism discussion, we can arguably put JSR’s cel-shaded graphics down to the limitations of the Dreamcast. See this useful comparison of the raw processing power of the Dreamcast against the Nintendo GameCube and the original Microsoft Xbox–I find it illuminating enough that I regularly go back and re-read it. The point it raises about the Dreamcast’s very modest RAM and, to a lesser extent, CPU processing power relative to the GC and Xbox is particularly thought-provoking, as is the Dreamcast’s unique approach to Z-buffering.

Cel-shading is most importantly related to the lighting in a 3D-rendered game: it replaces what would otherwise be smooth gradients of light and shadow on models with bands of solid colour. One can easily connect this to the way a game engine deals with textures (something the Dreamcast was, on paper, noticeably weaker at handling than the other two consoles), and what you have is, as John Teti said in his Gameological Society article, a creative way to succeed within the limitations of Sega’s (admittedly cool) console. Or, more intriguingly, you have the hardware design of the Dreamcast subtly dictating to Smilebit how they should make their game.
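To make the idea concrete, here is a minimal Python sketch of the principle: a continuous Lambertian lighting term is quantised into a few flat bands. The function names and the three-band threshold scheme are purely illustrative–they are not Smilebit’s actual implementation.

```python
def lambert_intensity(normal, light_dir):
    """Standard Lambertian diffuse term: clamp(N . L, 0, 1)."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, dot))

def cel_shade(intensity, bands=3):
    """Snap a continuous intensity in [0, 1] to one of `bands` flat levels."""
    # int(intensity * bands) picks the band; dividing by (bands - 1)
    # spreads the levels back across [0, 1] so the top band is full bright.
    level = min(int(intensity * bands), bands - 1)
    return level / (bands - 1)

# A smooth gradient collapses into a handful of solid colours:
gradient = [i / 10 for i in range(11)]
banded = [cel_shade(i) for i in gradient]
```

A real engine would apply this per pixel (or through a small one-dimensional lookup texture), but the effect is the same: large regions of a model collapse into the same flat colour, which is exactly the look JSR made famous.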

What about Gouraud shading on the N64? It was a solution to the incredible challenge posed by the platform’s cripplingly small texture cache. The N64 had its fair share of games with realistic graphics, but because it struggled with complex textures, programmers’ favoured workaround–shading–led to better performance with cartoon-ish visuals built from simple colours.
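As a rough illustration of why shading was such an attractive workaround, here is a toy Python version of the core of Gouraud shading: light is evaluated only at a triangle’s three vertices, and interior pixels merely interpolate those values with barycentric weights, so no per-pixel texture fetches are needed. The values here are invented for the example.

```python
def gouraud_interpolate(vertex_intensities, weights):
    """Blend per-vertex light intensities using barycentric weights (sum to 1)."""
    return sum(i * w for i, w in zip(vertex_intensities, weights))

# Light intensities computed once per vertex (e.g. by a Lambert term):
tri = (0.2, 0.8, 0.5)

# A pixel sitting exactly on the second vertex takes that vertex's value...
at_vertex = gouraud_interpolate(tri, (0.0, 1.0, 0.0))
# ...while a pixel at the triangle's centroid blends all three equally.
at_centroid = gouraud_interpolate(tri, (1/3, 1/3, 1/3))
```

Three lighting calculations per triangle, plus cheap interpolation, is far less work than texturing every pixel–hence the smooth, simply-coloured look of so many N64 games.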

More broadly, consider the transition from 2D to 3D graphics. Games like Tomb Raider were unthinkable in the late 80s–the processing power simply didn’t exist to render such complex environments. Games on more limited hardware are more or less forced to resort to 2D graphics. The kinds of experiences 2D graphics can deliver can be brilliant substantively and qualitatively, but they are far more limited formally than those based in three dimensions: Zelda-esque RPGs, platformers, 2D RTS games. Indeed, search and you will find that 2D platformers by far outnumbered games published in any other genre (much like first-person shooters today).

Zizek and The Matrix

Slavoj Zizek’s Parallax View gives us a good schema for dealing not only with the ideal-software/physical-software dialectic, but also, simultaneously, with that of determined-software/undetermined-software.

In an extended passage in which Zizek discusses the operation of ideology in today’s political environment, he performs an analysis of The Matrix series. Following his Lacanianism, he works out that the function of the matrix is based on a kind of perversion. The perversion on which the matrix is founded is two-fold:

  1. Perceivable human reality is reduced to a virtual domain whose rules can be suspended. It’s theoretically possible for someone to do or have anything in the matrix.
  2. The concealed truth of this apparent freedom is that humanity is actually the perfect kind of slave. Ultimately passive and instrumentalised, humans are farmed for their ability to generate electrical energy.

This dialectic corresponds to, and informs, the ideal/physical and determined/undetermined dialectics. Humans in the matrix are not only divided into ideal and physical components; they are also controlled and possessed by it, in exactly the same way that a piece of game software is by its corresponding hardware platform.

The perfect conception of software as an idealistic form is one that is totally independent of, and undetermined by, its hardware base, free of any kind of limitation or constraint. It would be the best kind of illusion imaginable: any creative desire could be pursued with this form of software–characters of infinite depth, plots infinitely complex and intriguing. The complete opposite is software as a ‘slave’ to a dominant and ‘parasitic’ hardware base. Instead of being a perfect dream, software would really just be a host on which hardware ‘feeds’. This image isn’t meant to be taken literally; it means that hardware exploits software in an attempt to ensure its own survival. A good example is the way iPhones are propagated by Farmville and other kinds of ‘casual’ games (the marriage is perfect, isn’t it? A device that is always with you, always requiring your attention). Further examples are the way operating system update schedules work, always requiring you to keep your device ‘up-to-date’, and the way console launch titles exist to make their console seem appealing.

The way The Matrix solves this dilemma (ultimate freedom/ultimate slavery) is by giving Neo the ability to use the powers he has inside the matrix outside it. Zizek quite rightly regards this method of synthesising The Matrix‘s virtual and real worlds as insufficient–much like revealing at the end of a detective novel that the murderer has incomprehensible magical powers. It merely conflates the original premises of the dilemma by trying to confer upon human-kind, in its slave-form, the virtual freedom it possesses in the matrix.

Zizek’s preferred solution to The Matrix dialectic puts us on the right track to conceiving a coherent relationship between games and their platforms: why didn’t humans try to sabotage the matrix by refusing to secrete any more energy into it? The important thing about this possible solution is that it is negative: it destroys both humanity’s slavery by the evil machines, and its ultimate freedom in the matrix. What is left is just good old, really-existing humanity–which is in fact a very complex way of arriving at the same conclusion as John Teti: software is partly constrained and determined by the limitations of its hardware, and is in reality physical.

But by performing all of this work, while we rather uninterestingly return to the position that software does have an existence that partly transcends its physical determination (the form this transcendence takes isn’t of immediate interest), we learn that software needs that obstacle. The point isn’t the mundane truism that software needs hardware in order to run, but that hardware presents a game designer/developer/programmer with a mould that needs to be broken out of. A good example is the number of first-person open-world games proliferating in our present gaming culture. We’ve reached a point in gaming hardware development that allows us to render huge worlds, but they’re not necessarily fun or interesting–much of it is ‘filler’. As Teti points out, Jonathan Blow’s upcoming game The Witness has attempted to make its game world as compact and as carefully constructed as possible. Like Teti, I regard this as good design, and good art. Blow’s piece is evidence of overcoming the obstacle of taking orders from one’s hardware base.

Think about some games you know that did their best to transcend their hardware limitations. A few that I can think of are Exhumed for the Saturn, Super Mario 64 (compare the horribly linear Crash Bandicoot series, which stole SM64‘s hub-world concept–NB that SM64 was a launch title!), id Software’s Doom and Quake, and the first two Pokemon generations.


PS4 Architecture: More ‘Open’?

Recent comments that game console software distribution is quickly becoming a media channel too archaic to reach today’s gamers are somewhat misguided. Many high-profile websites like Eurogamer and Kotaku seem to be lauding the rough specifications of the new PS4 console as a dramatic shift in the way console gaming has traditionally been conceived. They allege that by adopting a hardware architecture more in line with that of the PC, the PS4 will exhibit an openness as yet unseen in console gaming. Is this really true? This article from Gamasutra applies some healthy scepticism to Sony and Eurogamer’s optimism–what does an ‘open console’ mean, and, perhaps more importantly, what isn’t an open console?

Confucius Say, Look To The Past

Wise old Confucius tells us to look to the past if we want to divine answers about the future, and by looking back at the console architectures of ages gone by, one can tell that he’s not far off the mark. It’s commonly known that the most popular and successful game platforms of the last 40 years have almost always featured relatively open and accessible hardware architecture:

  1. Atari 2600: It may have possessed only 128 bytes of RAM, and lacked any kind of frame-buffer–not to mention Video RAM–but its CPU was a stripped-down MOS Technology 6502.
  2. NES: While its PPU featured bit-mapped sprite indices (limited sprites per TV scan-line, limited sprite sizes, colours and transformation rules), which caused developers endless frustration late in its life-cycle, its CPU was, again, based on the MOS 6502–modified to incorporate a sound generator and more I/O addresses.
  3. Commodore 64: Famously employed the MOS 6502 in its modified form, the 6510, which incorporated a general 8-bit I/O port into its design.
  4. TurboGrafx-16: Possessed a very intelligent three-chip architecture, which featured a CPU based on a modified 6502 design. Its CPU had an additional memory management unit, allowing it to address many times more memory than the original 6502, a parallel I/O port and a programmable sound generator. The brilliant thing about the TurboGrafx/PC Engine’s architecture was that its 16-bit Video Processor and accompanying Colour Encoder chip allowed the processing ability of its CPU to be maximised, allowing what was really a much cheaper console architecture to convincingly compete with its more modern rivals.

    Starting to see a pattern emerging?

  5. Super NES: Utilised a 16-bit CPU that was based on the architecture of the MOS 6502. Its instruction set (the list of digital input data required to make the processor do something) was a superset of that of the 6502, meaning that it could emulate its operation more or less flawlessly. In designing one of the most famous and fondly-remembered game consoles of all time, Nintendo made the decision to base its architecture on a platform already well-understood.
  6. Sega Mega Drive: Had its CPU based on the Motorola 68000, which was, for a time, very popular for its power and inexpensiveness. It was used in many famous computers such as the Apple Lisa, the first Macintoshes and the Commodore Amiga. One of the stand-out features of the 68000 family is that its design philosophy focused on orthogonality: its instruction set separated operations from address modes as strictly as possible, with the ideal that every operation be available and compatible with every address mode. Very simply, operations specified how to process/manipulate information, and address modes specified where that information was. For a time during the 80s, it was uncertain whether the 68000 design or the x86 (8086, 80286 &c) Intel IBM PC architecture would come to dominate the future design of computers. In comparison to the 68000 design philosophy, the x86 architecture was (and is) incredibly ugly and unintuitive.
  7. TRS-80, Sinclair ZX80/Spectrum, Sega Master System: These gaming platforms used the Zilog Z80 CPU, which, along with the MOS 6502, dominated home computing in the 80s. The Commodore 128 famously used a Z80 alongside its 6502-derivative CPU so that it could attain CP/M (a widespread operating system) compatibility.

    Compare the CPUs used in the following post-fourth generation era consoles with those of the above.

  8. PlayStation 1: Used a MIPS Computer Systems R3000-family CPU, at the time a high-end graphics workstation processor. It was fairly well understood and documented in elite graphics-production circles, but did not have anywhere near as widespread a commercial use as the above processors.
  9. N64: Made use of another member of a MIPS CPU family, the VR4300. Again, a very specialised processor.
  10. PS2: The Emotion Engine, a custom processor designed for the PS2 alone by Sony and Toshiba. While it features a MIPS-compatible instruction set, the chip actually comprises several separate processing units designed to work in parallel. This architecture was notoriously difficult to harness without special effort, in comparison to the Intel Pentium III-based Xbox and IBM PowerPC-based GameCube.
  11. PS3: Another highly customised CPU, developed by Sony in conjunction with Toshiba and, this time, IBM: the Cell Broadband Engine. It was another parallel-processing chip. As many commentators have said before, its potential has never been fully realised due to the increasing prevalence of multi-platform development.

While there are exceptions to the pattern constructed above (the original Xbox used PC-style x86 hardware, and both Nintendo consoles and the later Xbox 360 were based on the slightly more common, but still commercially obscure, IBM PowerPC architecture), Sony’s decision to base the PS4 on x86 architecture actually represents a return to basing a gaming console on a widely understood and utilised hardware architecture. In this vein, the comments some media outlets have been making about the purportedly archaic console-concept of game content delivery are misguided because, as history shows, consoles originally echoed the design concepts of personal computing.

Gaming in the 1980s was more or less exclusively powered by the MOS 6502, and to only a slightly smaller extent the Zilog Z80. In the fifth generation we see something of a deviation from common processor architecture in the form of MIPS CPUs. A qualification is important here: both Nintendo and Sony used MIPS technology, so, in a sense, the CPU instruction sets they were asking programmers to deal with were very similar between their two competing systems. However, if the fifth generation had followed the trend of the first four generations of console gaming, they would have implemented x86-compatible architecture then!

The release of the fifth generation of gaming consoles also coincided with the commercial rise of Reduced Instruction Set Computing (RISC): the design philosophy that a smaller, simpler set of CPU instructions leads to more efficient and powerful processing. If we use the implementation of RISC CPUs as our yardstick for how gaming console processors diverged from those most commonly used everywhere else (x86 architecture), we can conclude fairly decisively that game consoles significantly specialised their hardware from the mid-to-late nineties onward, since the PowerPC architecture is indeed RISC.

Yes, Indeed More Open

Ignoring content delivery systems (Blu-ray discs, Steam-like online content delivery &c), and based on even a cursory glance at the CPUs used in game consoles since the late seventies, the PS4 architecture, if indeed based on the x86 design concept, actually does present a more open platform for development. Much like the jump from the more-or-less exclusive use of assembly language to higher-level programming (like C) between the fourth and fifth generations of consoles, the use of an architecture more similar to that of the PC will allow developers to access more resources (both capital and labour) at much lower cost, ideally making games more plentiful and of better quality, since everyone will be able to ‘talk the same language’. However, this increased openness doesn’t actually present a challenge to console gaming as a tradition or a concept, because the console-versus-PC dichotomy is a relatively recent development: it has only really subsisted since the mid-to-late nineties.

Shovelware: Plato’s Republic

This might seem a fairly obscure and convoluted way of making what is really a very simple argument, but in The Republic, Socrates outlines to his listeners that justice manifests itself in three different compartments of a person’s ‘soul’ (their personhood or psychology &c): a rational part that pursues and lusts after truth, a ‘spirited’ part that desires nothing but honour, and an appetitive part that lusts after everything else: carnal desires, food and money. Socrates argues that for someone to be just, these three parts of their personhood need to be in the correct proportion. The same could be argued to apply to the design of console hardware.

If this strikes the reader as a bizarre connection, I point them towards Egoraptor’s famous and influential (not in an academic sense, obviously) ‘Sequelitis’ video about the first and second NES Castlevania titles. His argument is actually just a derivative of Plato’s argument in The Republic! The point being made here is that by basing their console on x86 hardware, Sony risks exacerbating what their PlayStation 1 and 2 console libraries suffered from worst of all: poorly and cheaply developed games. While the production of all the terrible money-grubbing titles that flooded store shelves in the 90s and 2000s for the PlayStations was due not to the hardware design of those consoles but to their branding goodwill, Sony now risks creating a particularly efficient cause for the development of PS4 shovelware.

While Plato’s argument about justice (in the mouth of Socrates) has fairly undemocratic consequences about politics, and correspondingly fairly elitist suggestions with respect to the artistic value of games (let alone anything: see the sections where Socrates discusses the ‘noble lie’ on which his republic would be founded (n.b. Nintendo fanboyism), and his justification for state censorship), one should at least be partly persuaded by it when the plethora of mindless first-person shooter franchises that dominate our current gaming culture are brought to mind. More than that, if we construe Plato’s 3-part soul argument more favourably, we can acknowledge that too much appetite (shovelware, mindless sequels &c) is the antithesis of good artistic values, but also see that too much lusting after truth produces illusion, and the excessive pursuit of honour is oppressive and unnecessarily violent.

While success is quite often based on something external to a planner’s purposive intention, let’s hope that this return to hardware convergence that the PS4 (and the new Xbox) seems to be making escapes excessive appetite in terms of aesthetics. Who knows–the PS4 might fail due to excessive honour, instead of appetite, like its predecessor, and many other consoles before it (the Saturn, with its rushed redesign due to Sega’s ignorance of the rise of 3D graphics, and, in part, the N64, with Nintendo’s spat with Sony leading to their rejection of CD-ROMs as a game medium).



As previously alluded to in our article on the development of the Sega Saturn hardware, history is written by the victors. It’s because the Saturn ultimately failed–commercially–as a platform that games like Exhumed never delivered developers like Lobotomy Software the reward their efforts deserved.

Exhumed (or Powerslave, as it was known in the US) is a corridor game that touts the virtues of the Saturn. With tight controls, well-executed concepts, and fantastic early fifth-generation graphics, Exhumed is an example of a triumph of substance over form, a true case of game design done right. At the risk of sounding formulaic, it’s important to stress that while it is a fantastic game, it isn’t without its flaws.

I’m going to buck the trend and say that the best thing about Exhumed is its non-linearity. That’s not to say that non-linearity is a virtue in itself (cf. this Kotaku article); rather, in this particular game, non-linearity as a method of delivering game-play has been well executed. From the very beginning, the player gets the impression that there is some deeper intention behind the levels they’re exploring. Out-of-reach items and impassable passages might at first present themselves as confusing obstacles, but after finding new power-ups and applying good old-fashioned logic, finally sating your curiosity is deeply satisfying. Given Exhumed‘s Egyptian setting, it’s entirely appropriate that the player should feel as if they’re stabbing around in a warren of inter-related pathways (many fittingly tomb-like), some to be traversed early, some later, some many times, and some to be noticed once and then entirely forgotten. As the game progresses you truly get the feeling that you’re moving deeper and deeper into some powerful and mysterious heart of darkness–and this is where Exhumed‘s aesthetics lend a great helping hand.

In addition to the way it structures its substance, Exhumed‘s graphical prowess offers much to impress. I’m going to ignore the features of Exhumed‘s graphics that modern attitudes towards first-person shooters would deem unacceptable (the lack of two full degrees of visual freedom–left-right and up-down–of fully 3D, context-sensitive game environments, and of complex NPC AI &c), because they don’t bear on the kind of game Exhumed was trying to be. In 1996, first-person shooters were games of a fledgling new genre. Many of them, like their progenitor, Doom, were corridor games. Judged as a corridor game, Exhumed‘s graphics and aesthetic features are of a very high standard. Combine this with the fact that it was purposefully designed for the Saturn, a platform with hardware architecture that was notoriously difficult to program, and you have something of great interest in video gaming history.

The stunning thing about Exhumed‘s visuals is that they feature large environments without sacrificing fast-paced game-play. This is achieved through a neat programming trick that owes much to Exhumed‘s Doom origins.

As Exhumed‘s 3D-engine programmer Ezra Dreisbach told Eurogamer in 2009,

…the main different thing about console FPS of that era is that every wall has to be diced into a grid of polygons. This is because there is no perspective-correct texture-mapping and, in the case of the Saturn, no way to clip. You really needed some custom tools to deal with/take advantage of this, and Lobotomy had Brew (made by David Lawson).

As Dreisbach stated in his interview with Segasaturn.co.uk, overcoming this limitation in texture-mapping was achieved by

automatically [combining] the wall tile graphics into fewer “uber-tiles” and [rendering] the walls like this when they [were] far away.

It’s a simple concept, but many 3D games of the era on the Saturn failed because they refused to take account of the Saturn‘s hardware. Through original programming, Exhumed was able to pull off enormous environments with fluid animation and dynamic lighting, helping develop a proper atmosphere in which to immerse the player.
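For illustration, here is a toy Python sketch of the cost saving behind Dreisbach’s ‘uber-tile’ trick. The function name, tile counts, combining factor and distance threshold are all invented for the example–they are not taken from Lobotomy’s actual Brew toolchain:

```python
import math

def polygons_for_wall(tiles_wide, tiles_high, distance,
                      uber_factor=4, far_threshold=50.0):
    """Return how many textured quads a wall costs to draw at a given distance."""
    if distance < far_threshold:
        # Near walls: one quad per tile, since the Saturn's lack of
        # perspective-correct texture mapping forced walls to be diced
        # into a grid of small polygons.
        return tiles_wide * tiles_high
    # Far walls: tiles are pre-combined into uber_factor x uber_factor
    # "uber-tiles", slashing the polygon count where the extra detail
    # would not be visible anyway.
    return (math.ceil(tiles_wide / uber_factor)
            * math.ceil(tiles_high / uber_factor))

near_cost = polygons_for_wall(16, 8, distance=10.0)  # 128 quads up close
far_cost = polygons_for_wall(16, 8, distance=80.0)   # only 8 quads far away
```

The saving scales with the square of the combining factor, which is how the engine could keep distant geometry on screen without dragging the frame rate down.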

Much like the effect achieved by the game-play progression, the visuals really do convey the idea that you’re penetrating a many-thousand year-old civilisation. Sky-lit levels leave you feeling roasted, laying everything bare and brutally exposed in its openness. The swarm-like onslaught of enemies in these environments cause you to become desperate with your weapons, as there is frequently nowhere to run. Contrastingly, underground levels are suitably chilling and dank, sparsely but properly lit, all giving the strong impression that these places are musty from thousands of years of rest, previously untouched, unseen, dormant and perfectly sealed.

The following excerpt from a now-defunct Slovak game magazine does a great job of conveying what Exhumed‘s atmosphere is like. The language is somewhat over-the-top (and not perfect), but I couldn’t put it any better:

I won’t start with technical execution, graphics or sounds, but I’ll spit out immediately the most important and gigantic thing which Exhumed has: atmosphere. Atmosphere of this game is something so perfect, heavy, [colourful and full of emotional impact], that words are not enough to describe it. You will be walking inside thousand-year-old temples full of mummies, and most fantastic decorations: vases, paintings, hieroglyphs. That [is all said] with a regard for [the game’s] monumental architecture, which makes you feel–even though you are the main hero, [on] which the [fate] of mankind depends, [and] though you will be fearfully killing enemies with your weapons– small and unimportant. Even though you [might be] cutting with a machete the strings of the original inhabitants, the buildings remain. [So too remains the] gold, paintings and the old culture, which is [also] indestructible. The majestic columns benevolently gaze over at doings of some man with knowledge that he will [soon] leave, and will leave them to rest maybe for [many more] thousands of years.

Before moving on, it's worth mentioning a key feature to praise: Lobotomy's engineering of convincing transparency in the game's water terrain–an effect heavily rumoured to be a weakness of the Saturn's hardware.


Sobek Pass’s One-Texture … Samba

There are points where the visuals falter. The main complaint to level against Exhumed here is a common one among many games of its time: poor texturing. As an example, an early level, ‘Sobek Pass’, is almost entirely composed of one texture. See right for an image.

The difficulty of telling different wall-faces apart makes navigating the level's environment confusing, and at times frustrating. While this is regrettable, the level very cleverly riddles away the keys to its doors, and staggers its assortment of enemies in an intelligent way. This is much the same wherever weak texture variation occurs: weakness in texturing is always moderated by the level design. The player is never unfairly forced to deal with too much of a challenge at once.

A welcome complement to both Exhumed's graphics and game-play is its controls. While the main character's enormous jump length, somewhat stilted ability to look up and down, and at times slippery pin-point on-the-spot manoeuvring take some getting used to, Exhumed's auto-aim and pleasantly precise controls give it an intuitive feel. As opposed to many corridor games and Doom clones/ports, the controls don't buck back at you in the middle of hectic fire-fights, resisting your will. It's obvious that careful attention has been paid to the player's interface with the game environment, because one is able to learn how to get better at Exhumed. It's rare that a console first-person shooter relying entirely on D-pad controls features such intuitive player interaction (cf. Croc), but here we have a shining example.

Modern FPS players might have a hard time adapting to what they might describe as a primitive control scheme and limited game-environment context, but, that aside, Exhumed is a stellar example of the fruits of the labour of a developer who cared about their work. Lobotomy Software may have ultimately paid the price for not jumping on the same bandwagon as many other early 3D developers, but they produced something authentic. Effort, here, clearly translated into quality, and it is for that reason that Exhumed is definitely worth your time.

If you can scrounge together–by any means–a working version of this Saturn title, you're guaranteed to be rewarded with an experience that creatively extends and develops the ideas that first appeared in Doom.

The reader can find a Lobotomy Software fan blog here and a YouTube video about the history of the developer here. A really good post from the fan blog, which centralises a lot of information about Exhumed, should not be missed. A write-up and a fairly illuminating interview about the technical aspects of Exhumed by GameFan can be found here:

I put the dynamic lights in after seeing Loaded on the PlayStation. Each of the wall polygons is being drawn gouraud shaded for the static torch light. As each vertex is transformed, the lighting contribution from the dynamic lights is added in. The algorithm is the cheapest, fastest thing I could think of that would still look okay.
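To illustrate the kind of calculation Dreisbach describes–with the caveat that the linear falloff and every name below are my own guesses for the "cheapest, fastest thing", not the game's actual code–a per-vertex light level might be accumulated like so:

```python
import math

def vertex_light(static_level, vertex, dynamic_lights, max_level=255):
    """Per-vertex lighting in the spirit of Dreisbach's description:
    start from the static torch light baked into the vertex, then add a
    distance-attenuated term for each dynamic light as the vertex is
    transformed. Falloff formula and names are illustrative assumptions.
    """
    level = static_level
    for (lx, ly, lz, intensity, radius) in dynamic_lights:
        dx, dy, dz = vertex[0] - lx, vertex[1] - ly, vertex[2] - lz
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dist < radius:
            level += intensity * (1.0 - dist / radius)  # cheap linear falloff
    # Clamp; Gouraud shading then interpolates the levels across the quad
    return min(int(level), max_level)
```

Because the cost is paid only once per vertex, and the hardware's Gouraud interpolation does the rest, this kind of scheme fits the "cheapest, fastest thing that would still look okay" description rather well.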

EDIT: For more screenshots and another great discussion of Exhumed's concepts, read this NeoGAF thread.
