Richard Garriott's Tabula Rasa
Developer(s): Destination Games
Publisher(s): NCsoft
Designer(s): Richard Garriott
Composer(s): Chris Vrenna, Clint Walsh
Engine: In-house, proprietary
Platform(s): Microsoft Windows
Release: November 2, 2007
Genre(s): MMORPG
Mode(s): Multiplayer

Richard Garriott's Tabula Rasa was an MMORPG developed by Destination Games and published by NCsoft, designed in part by some of the creators of Ultima Online, including Richard Garriott. The game was a role-playing video game that blended certain shooter aspects into its combat system. It was officially released to retail on November 2, 2007, with customers who pre-ordered the game allowed access to the live servers from October 30, 2007. The development team released updates, called 'Deployments', nearly every month following launch.[1] The game required a monthly subscription.

Tabula Rasa was about humanity's last stand against a group of aliens called the Bane. The story took place in the near future on two planets, Arieki and Foreas, which were in a state of constant conflict between the AFS (Allied Free Sentients) and the Bane. The term tabula rasa means 'clean slate' in Latin, which refers to a fresh start, or starting over.

According to the developers, the game included the ability for players to influence the outcome of a war between the player characters and the NPCs.

Tabula Rasa became free to play on January 10, 2009,[2] and closed on February 28, 2009.[3]

Background

Tabula Rasa was set in a fictional universe in which humanity makes its last stand against a group of aliens called the Bane. The story took place in the near future on two planets, Arieki and Foreas, which were in a state of constant conflict between the AFS (Allied Free Sentients) and the Bane. The term tabula rasa means 'clean slate' in Latin, referring to a fresh start, or starting over.

According to the fictional background story of Tabula Rasa, there once was an advanced alien species known as the Eloh. They freely shared with less advanced races their knowledge of Logos, the ability to convert between matter and energy with the mind alone. One of these less advanced races, the Thrax, used this power to wage war against the Eloh, a war the Eloh won but at great cost. This led to a great divide among the Eloh. One faction wanted to keep spreading the knowledge as before; the other, called the Neph, sought to control the development of 'lesser races' to ensure that they, the Neph, would always be the superior species. This conflict led the Neph to leave the Eloh and seek other allies, among them the defeated Thrax; these species joined with others to form the Bane, controlled by the Neph.

As one of its first acts, the Bane attacked the Eloh homeworld; the surviving Eloh fled and were scattered among the worlds they had previously visited. The Bane then attacked Earth sometime in our near future. Humanity was hopelessly outmatched and the majority of the population was wiped out. Fortunately, the Eloh had left behind some of their technology, which could open wormholes to other worlds. There, humans found other species doing the same thing they were: fighting the Bane to survive. They banded together to form the Army of the Allied Free Sentients to fight against the Bane.[4][5][6]

According to the game's manual, roughly five years have passed since Earth was attacked. It was eventually discovered that Earth had not been destroyed as once thought, but had instead become a massive staging ground for the Bane, from which they strengthened their forces and increased their attacks upon the AFS.

Gameplay

Combat

The combat mixed in some aspects of shooters to add real-time action elements to the game. It was still not an outright shooter: it featured sticky targeting, with dice rolls based on character stats underneath. Stickiness could be adjusted to fit the player's preference, and some weapons, such as the shotgun, did not use sticky targeting at all.[7] In addition to a hit-miss system, Tabula Rasa adjusted damage based on the situation, taking real-time factors such as weapon type, ammo type, stance, cover, and movement into account. The enemies were reported to have AI that would try to take advantage of the terrain and their numbers and would try to flank the players. The mix of system-based combat with real-time movement and physics created gameplay meant to encourage the player to think tactically, for example taking cover behind a pillar to gain time to reload while the enemies got back into position.[8]

Missions & storytelling

Missions were given out by NPCs but were not static: which missions were available, and even access to the NPCs themselves, depended on how the battlefield was going. Some were tied to control points that the player needed to reclaim from the Bane to regain access. Missions could also offer multiple ways to complete them. One example was destroying a dam to stop Bane forces, which would also demolish a local village; a player could choose to simply destroy it, or try to warn the village beforehand at the risk of further Bane advances.[9] Referred to as 'ethical parables', such missions were to make up about 20% of all missions.[10] The missions a player chose to do, and the choices made during them, changed the way certain NPCs treated the player's character. Some missions delivered the player's character to private instanced spaces; one design goal was to use instanced spaces for in-depth storytelling, with puzzles, traps, and NPCs, that would be more difficult in shared spaces. Some missions were ethically challenging, forcing players to choose between different points of view in ways that could alter their future progress. 'Ethical and moral dilemmas are something we definitely wanted to incorporate into the design of Tabula Rasa from the very start. The entire goal is to give you pause and allow you to think about the choices that they make in order to accomplish a mission.'[11][12]

Logos

Logos is a pictographic language left behind by the Eloh to be understood by other races. As players went through the game, they gained Logos symbols to add to their Logos tablet, a blank slate, gradually learning the language found throughout the game and gaining special powers. Logos can be considered Tabula Rasa's equivalent of magic, inasmuch as magic allows for incredible, otherwise unexplained acts; however, the Logos powers are presented as an extension of a scientific process developed by the Eloh.[13] Players could improve these abilities, and the upgraded versions could add new tactical uses. Some were universal while others were class specific; examples ranged from lightning bolt attacks and sprinting to reinforcements and poison-type powers.[5][6][14] The symbols were hidden throughout Tabula Rasa, and some were very hard to find.

Character creation

Tabula Rasa had a tree-based character class system. Everyone started out as a first-'tier' (branch) Recruit and was able to branch out as they progressed. The second tier comprised the Soldier and the Specialist, each of which in turn had two subclasses of its own. There were a total of four tiers.

Tabula Rasa also had a cloning function at each tier. It worked like a save function for characters at the branching point and allowed the player to try out the other branch without having to repeat the first several levels.[8]

Patch 1.4.6 introduced hybrid characters: humans whose DNA had been blended with Thrax, Forean, or Brann DNA to produce different stats and bonuses. Only full humans were available at the beginning; hybrid DNA became available via quest chains during play, which in turn unlocked the ability to create hybrids on that server at the creation screen, or via cloning.

Dynamic battlefield

AFS and Bane NPC forces were in constant battle, warring over control points and bases, and which side controlled these areas greatly affected the players. Losing one of them to the Bane meant that the respawn hospital, waypoints, shops, NPC access, and base defenses were lost and turned to the Bane's advantage.[15] Players could help NPC assaults take over bases or defend ones under attack. Control of these points was meant to change back and forth regularly even without player involvement, although the implementation at launch rarely let the Bane muster enough forces to invade a control point during peak player times. The Control Point System was one of the main gameplay features. Players fighting to defend or capture a CP (control point) earned Prestige points, which they could trade in for item upgrades, experience boosters, a reset of their attributes or learned abilities, or superior or rare equipment at grey market vendors. Prestige could also be earned by defeating bosses, looting rare items, reaching the maximum XP multiplier, and completing special missions. Later in the game, control points became increasingly important to players, as certain missions could only be accepted or completed while a point was in Bane or AFS hands, and they became the centerpiece of most of the later maps.[16]

Wargames (PvP)

PvP (player versus player) in Tabula Rasa was voluntary. There were three main modes of PvP combat:

  • Wargame duels, commonly known as duels. These were initiated by challenging a player by targeting them and using the radial menu; the challenged player then had to consent. The wargame ended when one player died, when the two players moved too far apart, or when one left the zone. These impromptu duels could be held between two players, two squads (groups), or a player and a squad.
  • Wargame feuds, commonly known as clan wars. These could only be fought by clans that had chosen to be PvP clans (a choice made during clan creation). Only a clan's leader could initiate or cancel a feud, and the request had to be accepted by the challenged clan's leader. A clan war lasted seven real-time days, during which players could fight each other without requesting consent first. During the war, kills were tallied and displayed in the players' wargame trackers, and the clan with the most kills at the end won the feud. Kills were only counted if the players were within five levels of each other, though players of any level could fight each other.
  • Wargame maps. Edmund Range was the only implemented map featuring large-scale team PvP. Using two sets of local teleporters, players could choose between the blue team and the red team. The map was accessed via the cellar area and was only available to players of level 45 and above. It consisted of several control points which each team had to capture. At the beginning of each match, Epic Bane stood inside the control points to stop rushers from capturing them all and gaining an unfair early advantage. Later patches introduced 'Personal Armour Units', giant robots that players could pilot exclusively in Edmund Range. At the end of each match the losing team was teleported back to the main entrance and the winning team to the upper floor of the staging area, where vendors sold unique armour sets for Prestige. Portable waypoints were disabled in the staging area to prevent players from cheating their way to the upper floor.

History

Development

In the works since May 2001, the game underwent a major revamp two years into the project. Conflicts between developers and the vague direction of the game were cited as the causes of this dramatic change: twenty percent of the original team was replaced, and 75% of the code had to be redone.[17] Some staff working on other NCsoft projects were transferred to the Tabula Rasa development team, including City of Heroes' community coordinator April 'CuppaJo' Burba.[18] First re-shown at E3 2005, the game had by then been transformed into its eventual science fiction setting and look.

Beta test

NCsoft began offering invitations to sign up for a limited beta test of Tabula Rasa on January 5, 2007[19] which began running on May 2, 2007.[20] Invitations were initially given out only as contest prizes, but beginning on August 8 several thousand additional invitations were distributed via the websites FilePlanet[21] and Eurogamer.[22] The non-disclosure agreement for the beta test was lifted on September 5, 2007 and the test ended on October 26, 2007 with a themed event in which players were invited to attempt to kill the character General British, played by game creator Richard Garriott.[23]

Bonus items

Two pre-order bonus packs were available on NCsoft's PlayNC website, one for Europe and one for the United States. The European pack sold for EUR 4.99 and the US pack for USD 4.99, in addition to the full retail version of the game at $49.99. Other than currency and which pack went with which retail version (the European pre-order was only valid with the European release of the game, and likewise for the US version), the packs were functionally identical, containing:

  • A serial code for unlocking bonus in-game content and beta access (once pre-order customers were admitted to the beta)
  • Exclusive Shell Bot or Pine-Ock non-combat pets, one per character
  • Two exclusive character emotes
  • A three-day head start on the live servers

For the retail release, a standard version and a collector's edition were released. Both contained the client and an account key with 30 days of included playtime; however, the Collector's Edition shipped with a number of bonus items, including:

  • A full colour game manual containing concept art
  • A letter briefing from General British
  • A map pack displaying the various game regions
  • An AFS Challenge Coin and set of Tabula Rasa Dog Tags
  • Fold out 'Black Ops' poster
  • 'Making of' Tabula Rasa DVD
  • 3 exclusive in-game items granted by the Collector's Edition key only: the Boo Bot, a summonable non-combat pet; a set of 4 unique armour paints; and a unique character emote.

Release

Tabula Rasa was officially released to retail on November 2, 2007, with customers who pre-ordered the game allowed access to the live servers from October 30, 2007. The development team released updates, called 'Deployments', nearly every month following launch.[24]

Closing

On November 11, 2008, an open letter to the players of Tabula Rasa announced that Richard Garriott had left NCsoft to pursue other ventures, though Garriott later claimed the letter had in fact been written by NCsoft as a means of forcing him out. The announcement was made while Garriott was in quarantine after returning from his spaceflight in October, and it claimed he had been inspired by the experience of space travel to pursue other interests.

On November 21, 2008, shortly after Garriott's announcement, Tabula Rasa's development team released an open letter of its own indicating that the game would end public service on February 28, 2009, citing a lower than expected in-game population as the major factor in the decision. The developers also announced that any active paying player as of 10:00 AM Pacific Time on November 21, 2008 would be eligible for rewards, including paid time on other NCsoft titles (paying subscribers joining after that point were ineligible). On December 9, 2008, a letter was sent by NCsoft stating that all Tabula Rasa servers would be shut down on February 28, 2009, and that Tabula Rasa would be discontinued. The servers became free to play on January 10, 2009.[2] On February 27, 2009, a message posted on the official website asked players to participate in a final assault, culminating in the mutual destruction of the AFS and Bane forces.[25][26][27]

Litigation

Richard Garriott sued NCsoft for $24 million[28] in damages relating to his departure from the company.[29] Garriott alleged that NCsoft terminated his employment and then fraudulently reported the termination as a voluntary resignation, which preserved NCsoft's right to cancel his stock options unless he exercised them within 90 days of termination, forcing him into an early exercise and sale that cost him tens of millions of dollars. Additionally, the news of the termination was issued while Garriott was confined to quarantine after the spaceflight, which had originally been intended as a publicity move to further promote the game and increase revenue. In July 2010, an Austin district court awarded Garriott US$28 million in his lawsuit against NCsoft, finding that the company did not appropriately handle his departure in 2008. NCsoft stated that it intended to appeal the decision.[30][31] In October 2011, the United States Court of Appeals for the Fifth Circuit affirmed the judgment.[32]

Reception

Review scores
Publication: Score
1Up.com: C+[33]
Eurogamer: 8/10
GameRevolution: C+[34]
GameSpot: 7.5/10
GameSpy: 4/5
IGN: 7.5/10[35]
X-Play: 4/5[citation needed]

Publications started to release reviews mainly after November 15, 2007, two weeks after the game's launch, although over a dozen wrote previews based on betas and the three-day head start for those who pre-ordered.[36]

GameSpy gave the game 4 stars out of 5, noting that the game's innovative combat system succeeded in redefining MMO combat and regarding it as one of the game's most appealing features. Negatives included the obscure and often counterproductive crafting system, the lack of a central trading hub at the initial release, and general gameplay bugs along with reports of memory leaks.[37][38][39]

Eurogamer gave the game 8 out of 10, praising the daring-to-be-different approach to combat and the class/cloning system, which let players experiment easily with career paths. On the negative side, the crafting system and the lack of an auction house were singled out. Though technical problems were also mentioned, the review noted that a recent patch had corrected many of the problems experienced in that regard.[40]

References

  1. 'PlayNC News: Dev Corner'. Archived from the original on August 20, 2008. Retrieved October 20, 2008.
  2. 'PlayNC Tabula Rasa Team Announcement'. Archived from the original on February 10, 2009. Retrieved November 21, 2008.
  3. 'An explanation that Tabula Rasa can no longer be played'. TabulaRasaMemorial.org. Archived from the original on April 18, 2009. Retrieved March 14, 2009.
  4. 'Backstory - Clean Slate'. Archived from the original on April 27, 2007. Retrieved March 5, 2007.
  5. 'Tabula Rasa Interview'. Archived from the original on September 27, 2007. Retrieved March 5, 2007.
  6. 'Tabula Rasa Almighty Preview'. Archived from the original on March 28, 2007. Retrieved March 12, 2007.
  7. 'Tabula Rasa Hands On'. Retrieved June 10, 2007.[dead link]
  8. w00t Radio CuppaJo Interview, January 17, 2007.
  9. 'Tabula Rasa Hands-on'. Retrieved June 10, 2007.
  10. 'Hands-On Preview, Interview with Richard Garriott'. Archived from the original on June 29, 2007. Retrieved June 10, 2007.
  11. 'An Audience with Lord British'. Retrieved March 5, 2007.[dead link]
  12. 'Interview With Richard and Robert Garriott About Tabula Rasa, Massively Multiplayer Online Games, And Taking On World of Warcraft'. Archived from the original on September 29, 2007. Retrieved March 5, 2007.
  13. Taken from the Collector's edition version of the game manual.
  14. 'Interview: Richard 'Lord British' Garriott'. Archived from the original on March 3, 2007. Retrieved March 5, 2007.
  15. GDC 2007 Tabula Rasa Demonstration. Retrieved January 17, 2007.
  16. Hands-On Preview, Interview with Richard Garriott.
  17. 'Tabula Rasa: A Candid Look'. Retrieved March 5, 2007.
  18. 'Welcome Recruits!'. Archived from the original on February 19, 2008. Retrieved January 29, 2008.
  19. 'OMG Betaz!'. Archived from the original on February 19, 2008. Retrieved January 29, 2008.
  20. 'Closed Beta Testing Starts!'. Archived from the original on May 4, 2007. Retrieved May 6, 2007.
  21. Limited Play Test.[permanent dead link] Retrieved August 10, 2007.
  22. Closed beta keys. Archived October 11, 2007, at the Wayback Machine. Retrieved August 10, 2007.
  23. 'The Tabula Rasa End of Beta Event'. Archived from the original on January 21, 2008. Retrieved January 29, 2008.
  24. 'PlayNC News: Dev Corner'. Archived from the original on August 20, 2008. Retrieved October 20, 2008.
  25. Remo, Chris (February 27, 2009). 'Tabula Rasa To Go Out With A Dark, Unusual Bang'. Gamasutra. Retrieved March 3, 2009.
  26. Kuchera, Ben (March 2, 2009). 'Does a game have to fail to have an ending? Tabula Rasa'. Ars Technica. Retrieved March 3, 2009.
  27. 'Tabula Rasa Shutdown Events'. Archived from the original on March 2, 2009.
  28. Plunkett, Luke (June 5, 2009). 'Richard Garriott Suing NCsoft For $24,000,000'. Kotaku. Gizmodo Media Group.
  29. Richard Garriott Sues NCSoft Over Millions in Stock Options. Archived November 8, 2011, at the Wayback Machine.
  30. Glasser, A.J. (July 30, 2010). 'Lord British wins $28 million in NCsoft lawsuit'. GamePro. GamePro Media. Archived from the original on October 17, 2010.
  31. Ladendorf, Kirk (July 29, 2010). 'Garriott wins $28 million jury award in NCsoft suit'. Statesman. Cox Media Group.
  32. Gaar, Brian (October 25, 2011). 'Appeals court upholds Garriott's $28 million verdict against NCsoft'. Statesman. Cox Media Group.
  33. 'Tabula Rasa'. 1Up.com. Archived from the original on July 17, 2012. Retrieved July 24, 2015.
  34. 'Tabula Rasa Review'. gamerevolution.com. Archived from the original on July 25, 2008. Retrieved July 24, 2015.
  35. 'Tabula Rasa Review'. IGN. Archived from the original on November 21, 2007. Retrieved July 24, 2015.
  36. 'Tabula Rasa'. gamerankings.com. Retrieved July 24, 2015.
  37. 'GameSpy: Tabula Rasa - Page 1'. gamespy.com. Retrieved July 24, 2015.
  38. 'GameSpy: Tabula Rasa Pile-on - Page 1'. gamespy.com. Retrieved July 24, 2015.
  39. Richard Garriott's Tabula Rasa. Archived December 20, 2008, at the Wayback Machine.
  40. 'Tabula Rasa'. Eurogamer.net. November 15, 2007. Retrieved July 24, 2015.

External links

  • Tabula Rasa at NCsoft's main website

Chapter 19. Deferred Shading in Tabula Rasa

Rusty Koonce
NCsoft Corporation

This chapter is meant to be a natural extension of 'Deferred Shading in S.T.A.L.K.E.R.' by Oles Shishkovtsov in GPU Gems 2 (Shishkovtsov 2005). It is based on two years of work on the rendering engine for the game Tabula Rasa, a massively multiplayer online video game (MMO) designed by Richard Garriott. While Shishkovtsov 2005 covers the fundamentals of implementing deferred shading, this chapter emphasizes higher-level issues, techniques, and solutions encountered while working with a deferred shading based engine.

19.1 Introduction

In computer graphics, the term shading refers to the process of rendering a lit object. This process includes the following steps:

  1. Computing geometry shape (that is, the triangle mesh)
  2. Determining surface material characteristics, such as the normal, the bidirectional reflectance distribution function, and so on
  3. Calculating incident light
  4. Computing surface/light interaction, yielding the final visual

Typical rendering engines perform all four of these steps at one time when rendering an object in the scene. Deferred shading is a technique that separates the first two steps from the last two, performing each at separate discrete stages of the render pipeline.
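
To make the separation concrete, the following is a minimal sketch (not taken from the Tabula Rasa engine; all sampler, parameter, and function names are illustrative assumptions) of how a deferred pipeline splits the work: a material pass writes surface data to multiple render targets, and a later light pass reads that data back to compute lighting.

    // Material pass: steps 1 and 2 (geometry and surface attributes).
    sampler2D DiffuseSampler;         // material texture (assumed name)

    struct MaterialOutput
    {
        float4 albedo : COLOR0;       // render target 0: surface color
        float4 normal : COLOR1;       // render target 1: view-space normal, packed to [0,1]
        float4 depth  : COLOR2;       // render target 2: eye-space depth
    };

    MaterialOutput MaterialPS(float2 uv         : TEXCOORD0,
                              float3 viewNormal : TEXCOORD1,
                              float  eyeDepth   : TEXCOORD2)
    {
        MaterialOutput o;
        o.albedo = tex2D(DiffuseSampler, uv);
        o.normal = float4(normalize(viewNormal) * 0.5f + 0.5f, 0.0f);
        o.depth  = float4(eyeDepth, 0.0f, 0.0f, 0.0f);
        return o;
    }

    // Light pass: steps 3 and 4, run once per light over the pixels it covers.
    sampler2D AlbedoBuffer;           // the three targets written above
    sampler2D NormalBuffer;
    sampler2D DepthBuffer;
    float3    LightPositionEye;       // light position in eye space
    float3    LightColor;
    float2    FrustumScale;           // tan(fov/2) * (aspect, 1); used to unproject

    float4 PointLightPS(float2 uv : TEXCOORD0) : COLOR0
    {
        float3 N     = tex2D(NormalBuffer, uv).xyz * 2.0f - 1.0f;
        float  depth = tex2D(DepthBuffer, uv).r;

        // Rebuild the eye-space position from the screen coordinate and stored depth.
        float2 clipXY = uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f);
        float3 P      = float3(clipXY * FrustumScale * depth, depth);

        float3 L     = normalize(LightPositionEye - P);
        float  NdotL = saturate(dot(N, L));
        return tex2D(AlbedoBuffer, uv) * float4(LightColor * NdotL, 1.0f);
    }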

In this chapter, we assume the reader has a basic understanding of deferred shading. For an introduction to deferred shading, refer to Shishkovtsov 2005, Policarpo and Fonseca 2005, Hargreaves and Harris 2004, or another such resource.

In this chapter, the term forward shading refers to the traditional shading method in which all four steps in the shading process are performed together. The term effect refers to a Direct3D D3DX effect. The terms technique, annotation, and pass are used in the context of a D3DX effect.

The term material shader refers to an effect used for rendering geometry (that is, in the first two steps of the shading process) and light shader refers to an effect used for rendering visible light (part of the last two steps in the shading process). A body is a geometric object in the scene being rendered.

We have avoided GPU-specific optimizations or implementations in this chapter; all solutions are generic, targeting either Shader Model 2.0 or Shader Model 3.0 hardware. In this way, we hope to emphasize the technique and not the implementation.

19.10 Conclusion

Deferred shading has progressed from theoretical to practical. Many times new techniques are too expensive, too abstract, or just too impractical to be used outside of a tightly scoped demo. Deferred shading has proven to be a versatile, powerful, and manageable technique that can work in a real game environment.

The main drawbacks of deferred shading include the following:

  • High memory bandwidth usage
  • No hardware antialiasing
  • Lack of proper alpha-blending support

We have found that current midrange hardware is able to handle the memory bandwidth requirements at modest resolution, with current high-end hardware able to handle higher resolutions with all features enabled. With DirectX 10 hardware, MRT performance has been improved significantly by both ATI and NVIDIA. DirectX 10 and Shader Model 4.0 also provide integer operations in pixel shaders as well as read access to the depth buffer, both of which can be used to reduce memory bandwidth usage. Performance should only continue to improve as new hardware and new features become available.

Reliable edge detection combined with proper filtering can significantly minimize aliasing artifacts around geometry edges. Although these techniques are not as accurate as the subsampling found in hardware full-scene antialiasing, the method still produces results that trick the eye into smoothing hard edges.

The primary outstanding issue with deferred shading is the lack of alpha-blending support. We consciously sacrificed some visual quality related to no transparency support while in the deferred pipeline. However, we felt overall the gains from using deferred shading outweighed the issues.

The primary benefits of deferred shading include the following:

  • Lighting cost is independent of scene complexity.
  • Shaders have access to depth and other pixel information.
  • Each pixel is lit only once per light. That is, no lighting is computed on pixels that later become occluded by other opaque geometry.
  • Clean separation of shader code: material rendering is separated from lighting computations.

Every day, new techniques and new hardware come out, and with them, the desirability of deferred shading may go up or down. The future is hard to predict, but we are happy with our choice to use deferred shading in the context of today's hardware.

19.11 References

Hargreaves, Shawn, and Mark Harris. 2004. '6800 Leagues Under the Sea: Deferred Shading.' Available online at http://developer.nvidia.com/object/6800_leagues_deferred_shading.html.

Kozlov, Simon. 2004. 'Perspective Shadow Maps: Care and Feeding.' In GPU Gems, edited by Randima Fernando, pp. 217–244.

Martin, Tobias, and Tiow-Seng Tan. 2004. 'Anti-aliasing and Continuity with Trapezoidal Shadow Maps.' In Eurographics Symposium on Rendering Proceedings 2004, pp. 153–160.

Policarpo, Fabio, and Francisco Fonseca. 2005. 'Deferred Shading Tutorial.' Available online at http://fabio.policarpo.nom.br/docs/Deferred_Shading_Tutorial_SBGAMES2005.pdf.

Shishkovtsov, Oles. 2005. 'Deferred Shading in S.T.A.L.K.E.R.' In GPU Gems 2, edited by Matt Pharr, pp. 143–166. Addison-Wesley.

Sousa, Tiago. 2005. 'Generic Refraction Simulation.' In GPU Gems 2, edited by Matt Pharr, pp. 295–306. Addison-Wesley.

Stamminger, Marc, and George Drettakis. 2002. 'Perspective Shadow Maps.' In ACM Transactions on Graphics (Proceedings of SIGGRAPH 2002) 21(3), pp. 557–562.

I would like to thank all of the contributors to this chapter, including a few at NCsoft Austin who also have helped make our rendering engine possible: Sean Barton, Tom Gambill, Aaron Otstott, John Styes, and Quoc Tran.

19.2 Some Background

In Tabula Rasa, our original rendering engine was a traditional forward shading engine built on top of DirectX 9, using shaders built on HLSL D3DX effects. Our effects used pass annotations within the techniques that described the lighting supported by that particular pass. The engine on the CPU side would determine what lights affected each body. This information, along with the lighting data in the effect pass annotations, was used to set light parameters and invoke each pass the appropriate number of times.
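
As a rough illustration only (the annotation names below are invented for this sketch, not the engine's actual metadata), such an effect might expose per-pass annotations that the CPU-side code reads to decide how many lights of each type to bind and how many times to repeat a pass:

    float4x4 WorldViewProjection;

    float4 ForwardVS(float4 position : POSITION) : POSITION
    {
        return mul(position, WorldViewProjection);
    }

    float4 ForwardAmbientPS() : COLOR0
    {
        return float4(0.1f, 0.1f, 0.1f, 1.0f);   // stub; the real pass evaluates its lights
    }

    float4 ForwardPointLightPS() : COLOR0
    {
        return float4(0.0f, 0.0f, 0.0f, 1.0f);   // stub
    }

    technique ForwardLitStatic
    {
        // The engine reads these annotations to match lights to passes.
        pass AmbientAndDirectional
        <
            bool addsAmbient          = true;
            int  maxDirectionalLights = 1;
        >
        {
            VertexShader = compile vs_2_0 ForwardVS();
            PixelShader  = compile ps_2_0 ForwardAmbientPS();
        }

        pass AdditivePointLights
        <
            int maxPointLights = 2;   // repeated until all point lights are drawn
        >
        {
            AlphaBlendEnable = true;
            SrcBlend         = ONE;
            DestBlend        = ONE;
            VertexShader     = compile vs_2_0 ForwardVS();
            PixelShader      = compile ps_2_0 ForwardPointLightPS();
        }
    }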

This forward shading approach has several issues:

  • Computing which lights affect each body consumes CPU time, and in the worst case, it becomes an O(n x m) operation.
  • Shaders often require more than one render pass to perform lighting, with complicated shaders requiring worst-case O(n) render passes for n lights.
  • Adding new lighting models or light types requires changing all effect source files.
  • Shaders quickly encounter the instruction count limit of Shader Model 2.0.

Working on an MMO, we do not have tight control over the game environment. We can't control how many players are visible at once or how many visual effects or lights may be active at once. Given our lack of control of the environment and the poor scalability of lighting costs within a forward renderer, we chose to pursue a deferred-shading renderer. We felt this could give us visuals that could match any top game engine while making our lighting costs independent of scene geometric complexity.

The deferred shading approach offers the following benefits:

  • Lighting costs are independent of scene complexity; there is no overhead of determining what lights affect what body.
  • There are no additional render passes on geometry for lighting, resulting in fewer draw calls and fewer state changes required to render the scene.
  • New light types or lighting models can be added without requiring any modification to material shaders.
  • Material shaders do not perform lighting, freeing up instructions for additional geometry processing.

Deferred shading requires multiple render target (MRT) support and utilizes increased memory bandwidth, making the hardware requirements for deferred shading higher than what we wanted our minimum specification to be. Because of this, we chose to support both forward and deferred shading. We leveraged our existing forward shading renderer and built on top of it our deferred rendering pipeline.

With a full forward shading render pipeline as a fallback, we were able to raise our hardware requirements for our deferred shading pipeline. We settled on requiring Shader Model 2.0 hardware for our minimum specification and forward rendering pipeline, but we chose to require Shader Model 3.0 hardware for our deferred shading pipeline. This made development of the deferred pipeline much easier, because we were no longer limited in instruction counts and could rely on dynamic branching support.

19.3 Forward Shading Support

Even with a deferred shading-based engine, forward shading is still required for translucent geometry (see Section 19.8 for details). We retained support for a fully forward shaded pipeline within our renderer. Our forward renderer is used for translucent geometry as well as a fallback pipeline for all geometry on lower-end hardware.

This section describes methods we used to make simultaneous support for both forward and deferred shading pipelines more manageable.

19.3.1 A Limited Feature Set

We chose to limit the lighting features of our forward shading pipeline to a very small subset of the features supported by the deferred shading pipeline. Some features could not be supported for technical reasons, some were not supported due to time constraints, but many were not supported purely to make development easier.

Our forward renderer supports only hemispheric, directional, and point lights, with point lights being optional. No other type of light is supported (such as spotlights and box lights, both of which are supported by our deferred renderer). Shadows and other features found in the deferred pipeline were not supported in our forward pipeline.

Finally, the shader in the forward renderer could do per-vertex or per-pixel lighting. In the deferred pipeline, all lighting is per-pixel.

19.3.2 One Effect, Multiple Techniques

We have techniques in our effects for forward shading, deferred shading, shadow-map rendering, and more. We use an annotation on the technique to specify which type of rendering the technique was written for. This allows us to put all shader code in a single effect file that handles all variations of a shader used by the rendering engine. See Listing 19-1. This includes techniques for forward shading static and skinned geometry, techniques for 'material shading' static and skinned geometry in our deferred pipeline, as well as techniques for shadow mapping.

Having all shader code for one effect in a single place allows us to share as much of that code as possible across all of the different techniques. Rather than using a single, monolithic effect file, we broke it down into multiple shader libraries, source files that contain shared vertex and pixel programs and generic functions, that are used by many effects. This approach minimized shader code duplication, making maintenance easier, decreasing the number of bugs, and improving consistency across shaders.

19.3.3 Light Prioritization

Our forward renderer quickly generates additional render passes as more lights become active on a piece of geometry. This generates not only more draw calls, but also more state changes and more overdraw. We found that our forward renderer with just a fraction of the lights enabled could be slower than our deferred renderer with many lights enabled. So to maximize performance, we severely limited how many lights could be active on a single piece of geometry in the forward shading pipeline.

Example 19-1. Sample Material Shader Source
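
The book's listing is not reproduced here. The following is a hedged sketch of the idea it illustrates: a single effect file exposing forward, deferred ('material'), and shadow-map techniques, distinguished by a technique annotation. The include file names, annotation strings, and shader entry points are assumptions, not the shipped source.

    #include "lib_geometry.fxh"   // hypothetical shared libraries providing StaticVS,
    #include "lib_material.fxh"   // ForwardShadePS, MaterialPS, and DepthOnlyPS

    technique StaticForward
    < string renderPipeline = "Forward"; >
    {
        pass P0
        {
            VertexShader = compile vs_2_0 StaticVS();
            PixelShader  = compile ps_2_0 ForwardShadePS();
        }
    }

    technique StaticDeferredMaterial
    < string renderPipeline = "Material"; >
    {
        pass P0
        {
            VertexShader = compile vs_3_0 StaticVS();
            PixelShader  = compile ps_3_0 MaterialPS();    // writes the MRT attributes
        }
    }

    technique StaticShadowMap
    < string renderPipeline = "ShadowMap"; >
    {
        pass P0
        {
            VertexShader = compile vs_2_0 StaticVS();
            PixelShader  = compile ps_2_0 DepthOnlyPS();
        }
    }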

Our deferred rendering pipeline can handle thirty, forty, fifty, or more active dynamic lights in a single frame, with the costs being independent of the geometry that is affected. However, our forward renderer quickly bogs down when just a couple of point lights start affecting a large amount of geometry. With such a performance discrepancy between the two pipelines, using the same lights in both pipelines was not possible.

We gave artists and designers the ability to assign priorities to lights and specify if a light was to be used by the forward shading pipeline, the deferred shading pipeline, or both. A light's priority is used in both the forward and the deferred shading pipelines whenever the engine needs to scale back lighting for performance. With the forward shading pipeline, scaling back a light simply means dropping it from the scene; however, in the deferred shading pipeline, a light could have shadows disabled or other expensive properties scaled back based on performance, quality settings, and the light's priority.

In general, maps were lit targeting the deferred shading pipeline. A quick second pass was made to ensure that the lighting in the forward shading pipeline was acceptable. Generally, the only additional work was to increase ambient lighting in the forward pipeline to make up for having fewer lights than in the deferred pipeline.

19.4 Advanced Lighting Features

All of the following techniques are possible in a forward or deferred shading engine. We use all of these in our deferred shading pipeline. Even though deferred shading is not required, it made implementation much cleaner. With deferred shading, we kept the implementation of such features separate from the material shaders. This meant we could add new lighting models and light types without having to modify material shaders. Likewise, we could add material shaders without any dependency on lighting models or light types.

19.4.1 Bidirectional Lighting

Traditional hemispheric lighting as described in the DirectX documentation is fairly common. This lighting model uses two colors, traditionally labeled as top and bottom, and then linearly interpolates between these two colors based on the surface normal. Typically hemispheric lighting interpolates the colors as the surface normal moves from pointing directly up to directly down (hence the terms top and bottom). In Tabula Rasa, we support this traditional hemispheric model, but we also added a back color to directional lights.

With deferred lighting, artists are able to easily add multiple directional lights. We found them adding a second directional light that was aimed in nearly the opposite direction of the first, to simulate bounce or radiant light from the environment. They really liked the look and the control this gave them, so the natural optimization was to combine the two opposing directional lights into a single, new type of directional light—one with a forward color and a back color. This gave them the same control at half of the cost.

As a further optimization, the back color is just a simple N · L, or a simple Lambertian light model. We do not perform any specular, shadowing, occlusion, or other advanced lighting calculation on it. This back color is essentially a cheap approximation of radiant or ambient light in the scene. We save N · L computed from the front light and just negate it for the back color.
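
A minimal sketch of that back-color term, assuming eye-space inputs and invented parameter names:

    float3 LightDirection;   // direction the light shines, normalized
    float3 FrontColor;       // traditional directional light color
    float3 BackColor;        // cheap approximation of bounced/radiant light

    float3 EvaluateBidirectionalDiffuse(float3 N, float3 albedo)
    {
        float NdotL = dot(N, -LightDirection);

        // The front side gets the full diffuse term (specular, shadowing, and so on
        // would be added here); the back side reuses the negated N.L with no other
        // lighting work.
        float3 front = saturate( NdotL) * FrontColor;
        float3 back  = saturate(-NdotL) * BackColor;
        return albedo * (front + back);
    }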

19.4.2 Globe Mapping

A globe map is a texture used to color light, like a glass globe placed around a light source in real life. As a ray of light is emitted from the light source, it must pass through this globe, where it becomes colored or blocked. For point lights, we use a cube map for this effect. For spotlights, we use a 2D texture. These can be applied to cheaply mimic stained-glass effects or block light in specific patterns. We also give artists the ability to rotate or animate these globe maps.

Artists use globe maps to cheaply imitate shadow maps when possible, to imitate stained-glass effects, disco ball reflection effects, and more. All lights in our engine support them. See Figures 19-1 through 19-3 for an example of a globe map applied to a light.

Figure 19-2 A Simple Globe Map
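
A sketch of the point-light case described above, with invented sampler and parameter names (a spotlight would sample a 2D texture with projected coordinates instead):

    samplerCUBE GlobeMap;        // the 'glass globe' texture around the light
    float4x4    GlobeRotation;   // optional rotation/animation of the globe

    float3 ApplyGlobeMap(float3 lightColor, float3 lightToPixelDir)
    {
        // Tint (or block) the light by the texel its ray passes through on the globe.
        float3 dir = mul(float4(lightToPixelDir, 0.0f), GlobeRotation).xyz;
        return lightColor * texCUBE(GlobeMap, dir).rgb;
    }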

19.4.3 Box Lights

In Tabula Rasa, directional lights are global lights that affect the entire scene and are used to simulate sunlight or moonlight. We found artists wanting to light a small area with a directional light, but they did not want the light to affect the entire scene. What they needed were localized directional lights.

Our solution for a localized directional light was a box light. These lights use our basic directional lighting model, but they are confined within a rectangular volume. They support falloff like spotlights, so their intensity can fade out as they near the edge of their light volume. Box lights also support shadow maps, globe maps, back color, and all of the other features of our lighting engine.
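
A sketch of how such a confinement-with-falloff term might look, assuming a transform that maps world space into the light's unit box (the names and the falloff parameterization are assumptions):

    float4x4 WorldToUnitBox;   // maps the light's volume to [-1,1] on each axis
    float    EdgeFalloff;      // fraction of the half-extent over which intensity fades

    float BoxLightAttenuation(float3 worldPosition)
    {
        float3 p = mul(float4(worldPosition, 1.0f), WorldToUnitBox).xyz;
        float  d = max(abs(p.x), max(abs(p.y), abs(p.z)));   // 0 at center, 1 at the faces

        // Full intensity inside the core of the box, fading to zero at the faces.
        return saturate((1.0f - d) / max(EdgeFalloff, 1e-4f));
    }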

19.4.4 Shadow Maps

There is no precomputed lighting in Tabula Rasa. We exclusively use shadow maps, not stencil shadows or light maps. Artists can enable shadow casting on any light (except hemispheric). We use cube maps for point light shadow maps and 2D textures for everything else.

All shadow maps currently in Tabula Rasa are floating-point textures and utilize jitter sampling to smooth out the shadows. Artists can control the spread of the jitter sampling, giving control over how soft the shadow appears. This approach allowed us to write a single solution that worked and looked the same on all hardware; however, hardware-specific texture formats can be used as well for shadow maps. Hardware-specific formats can provide benefits such as better precision and hardware filtering.

Global Shadow Maps

Many papers exist on global shadow mapping, or shadow mapping the entire scene from the perspective of a single directional light. We spent a couple of weeks researching and playing with perspective shadow maps (Stamminger and Drettakis 2002) and trapezoidal shadow maps (Martin and Tan 2004). The downfall of these techniques is that the final result depends on the angle between the light direction and the eye direction. In both methods, as the camera moves, the shadow quality varies, with the worst case reducing to the standard orthographic projection.

In Tabula Rasa, there is a day and night cycle, with the sun and moon constantly moving across the sky. Dusk and dawn are tricky because the light direction comes close to being parallel to the ground, which largely increases the chance of the eye direction becoming parallel to the light direction. This is the worst-case scenario for perspective and trapezoidal shadow maps.

Due to the inconsistent shadow quality as the camera or light source moved, we ended up using a single large 2048x2048 shadow map with normal orthographic projection. This gave us consistent results that were independent of the camera or light direction. However, new techniques that we have not tried may work better, such as cascaded shadow maps.

We used multisample jitter sampling to soften shadow edges, we quantized the position of the light transform so it always pointed to a consistent location within a subpixel of the shadow map, and we quantized the direction of the light so the values going into the shadow map were not minutely changing every frame. This gave us a stable shadow with a free-moving camera. See Listing 19-2.

Example 19-2. C++ Code That Quantizes Light Position for Building the Shadow Map Projection Matrix
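
The original listing is not reproduced here. The following C++ fragment is a hedged reconstruction of the idea described above: snap the shadow camera's position to whole shadow-map texel increments along the light's right and up axes so the rasterized depths do not shimmer as the camera moves. Types and names are invented for the sketch.

    #include <cmath>

    struct Vec3 { float x, y, z; };   // stand-in vector type for the sketch

    static float Dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // worldUnitsPerTexel = orthographic shadow frustum width / shadow map resolution
    Vec3 QuantizeShadowCameraPosition(const Vec3& desiredPosition,
                                      const Vec3& lightRight,   // unit right axis of the light
                                      const Vec3& lightUp,      // unit up axis of the light
                                      float worldUnitsPerTexel)
    {
        // Express the position in the light's right/up basis.
        float r = Dot(desiredPosition, lightRight);
        float u = Dot(desiredPosition, lightUp);

        // Snap both coordinates to texel-sized increments.
        float snappedR = std::floor(r / worldUnitsPerTexel) * worldUnitsPerTexel;
        float snappedU = std::floor(u / worldUnitsPerTexel) * worldUnitsPerTexel;

        // Apply the (sub-texel) correction along the two axes.
        float dr = snappedR - r;
        float du = snappedU - u;
        Vec3 result = desiredPosition;
        result.x += dr * lightRight.x + du * lightUp.x;
        result.y += dr * lightRight.y + du * lightUp.y;
        result.z += dr * lightRight.z + du * lightUp.z;
        return result;
    }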

Local Shadow Maps

Because any light can cast shadows in our engine, with maps having hundreds of lights, the engine must manage the creation and use of many shadow maps. All shadow maps are generated on the fly as they are needed. However, most shadow maps are static and do not need to be regenerated each frame. We gave artists control over this by letting them set a flag on each shadow-casting light if they wanted to use a static shadow map or a dynamic shadow map. Static shadow maps are built only once and reused each frame; dynamic shadow maps are rebuilt each frame.

We flag geometry as static or dynamic as well, indicating if the geometry moves or not at runtime. This allows us to cull geometry based on this flag. When building static shadow maps, we cull out dynamic geometry. This prevents a shadow from a dynamic object, such as an avatar, from getting 'baked' into a static shadow map. However, dynamic geometry is shadowed just like static geometry by these static shadow maps. For example, an avatar walking under a staircase will have the shadows from the staircase fall across it.

A lot of this work can be automated and optimized. We chose not to prebuild static shadow maps; instead, we generate them on the fly as we determine they are needed. This means we do not have to ship and patch shadow map files, and it reduces the amount of data we have to load from disk when loading or running around a map. To combat video memory usage and texture creation overhead, we use a shadow map pool. We give more details on this later in the chapter.

Dynamic shadow-casting lights are the most expensive, because they have to constantly regenerate their shadow maps. If a dynamic shadow-casting light doesn't move, or doesn't move often, several techniques can be used to help improve its performance. The easiest is to not regenerate one of these dynamic shadow maps unless a dynamic piece of geometry is within its volume. The other option is to render the static geometry into a separate static shadow map that is generated only once. Each frame it is required, render just the dynamic geometry to a separate dynamic shadow map. Composite these two shadow maps together by comparing values from each and taking the lowest, or nearest, value. The final result will be a shadow map equivalent to rendering all geometry into it, but only dynamic geometry is actually rendered.
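
A sketch of that compositing step, assuming depth-style shadow maps where smaller values are nearer to the light (sampler names invented):

    sampler2D StaticShadowMap;    // built once from static geometry
    sampler2D DynamicShadowMap;   // rebuilt each frame from dynamic geometry only

    float4 CompositeShadowMapsPS(float2 uv : TEXCOORD0) : COLOR0
    {
        float staticDepth  = tex2D(StaticShadowMap,  uv).r;
        float dynamicDepth = tex2D(DynamicShadowMap, uv).r;

        // Keep the nearer depth; the result matches rendering all geometry into a
        // single map, but only the dynamic geometry was re-rendered this frame.
        return min(staticDepth, dynamicDepth);
    }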

19.4.5 Future Expansion

With all lighting functionality cleanly separated from geometry rendering, modifying or adding lighting features is extremely easy with a deferred shading-based engine. In fact, box lights went from a proposal on a whiteboard to fully functional with complete integration into our map editor in just three days.

High dynamic range, bloom, and other effects are just as easy to add to a deferred shading-based engine as to a forward-based one. The architecture of a deferred shading pipeline lends itself well to expansion in most ways. Typically, adding features to a deferred engine is no harder, and often easier, than it would be for a forward shading-based engine. The issues most likely to constrain the feature set of a deferred shading engine are the limited number of material properties that can be stored per pixel, available video memory, and video memory bandwidth.

19.5 Benefits of a Readable Depth and Normal Buffer

A requirement of deferred shading is building textures that hold depth and normal information. These are used for lighting the scene; however, they can be used outside of the scope of lighting for various visual effects such as fog, depth blur, volumetric particles, and removing hard edges where alpha-blended geometry intersects opaque geometry.

19.5.1 Advanced Water and Refraction

In Tabula Rasa, if using the deferred shading pipeline, our water shader takes into account water depth (in eye space). As each pixel of the water is rendered, the shader samples the depth saved from the deferred shading pipeline and compares it to the depth of the water pixel. This means our water can auto-shoreline, the water can change color and transparency with eye-space depth, and pixels beneath the water refract whereas pixels above the water do not. It also means that we can do all of these things in a single pass, unlike traditional forward renderers.

Our forward renderer supports only our basic refraction feature, and it requires an additional render pass to initialize the alpha channel of the refraction texture in order to not refract pixels that are not underneath the water. This basic procedure is outlined in Sousa 2005.

In our deferred renderer, we can sample the eye depth of the current pixel and the eye depth of the neighboring refracted pixel. By comparing these depths, we can determine if the refracted pixel is indeed behind the target pixel. If it is, we proceed with the refraction effect; otherwise we do not. See Figures 19-4 and 19-5.

Figure 19-4 Water Using Forward Shading Only

Figure 19-5 Water Using Forward Shading, but with Access to the Depth Buffer from Deferred Shading Opaque Geometry
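
A sketch of the depth comparison described above, assuming the deferred pipeline's eye-space depth is available as a texture (sampler and parameter names are invented):

    sampler2D SceneDepthBuffer;    // eye-space depth written by the opaque pass
    sampler2D RefractionTexture;   // scene color to refract

    float4 SampleWaterRefraction(float2 screenUV,
                                 float2 refractionOffset,   // from the water normal maps
                                 float  waterEyeDepth)      // eye-space depth of this water pixel
    {
        float2 refractedUV    = screenUV + refractionOffset;
        float  refractedDepth = tex2D(SceneDepthBuffer, refractedUV).r;

        float4 refracted   = tex2D(RefractionTexture, refractedUV);
        float4 unperturbed = tex2D(RefractionTexture, screenUV);

        // Only use the refracted sample if that pixel really lies behind the water
        // surface; otherwise objects in front of the water would bleed into it.
        return (refractedDepth > waterEyeDepth) ? refracted : unperturbed;
    }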

To give the artist control over color and transparency with depth, we actually use a volume texture as well as a 1D texture. The 1D texture is just a lookup table for transparency with the normalized water depth being used for the texture coordinate. This technique allowed artists to easily simulate a nonlinear relationship between water depth and its transparency. The volume texture was actually used for the water surface color. This could be a flat volume texture (becoming a regular 2D texture) or it could have two or four W layers. Again, the normalized depth was used for the W texture coordinate with the UV coordinates being specified by the artists. The surface normal of the water was driven by two independently UV-animated normal maps.

19.5.2 Resolution-Independent Edge Detection

Shishkovtsov 2005 presented a method for edge detection that was used for faking an antialiasing pass on the frame. The implementation relied on some magic numbers that varied based on resolution. We needed edge detection for antialiasing as well; however, we modified the algorithm to make the implementation resolution independent.

We looked at changes in depth gradients and changes in normal angles by sampling all eight neighbors surrounding a pixel. This is basically the same as Shishkovtsov's method. We diverge at this point and compare the maximum change in depth to the minimum change in depth to determine how much of an edge is present. This depth gradient between pixels is resolution dependent. By comparing relative changes in this gradient instead of comparing the gradient to fixed values, we are able to make the logic resolution independent.

Our normal processing is very similar to Shishkovtsov's method. We compare the changes in the cosine of the angle between the center pixel and its neighboring pixels along the same edges at which we test depth gradients. We use our own constant number here; however, the change in normals across pixels is not resolution dependent. This keeps the logic resolution independent.

We do not put any logic in the algorithm to limit the selection to 'top right' or 'front' edges; consequently, many edges become a couple of pixels wide. However, this works out well with our filtering method to help smooth those edges.

The output of the edge detection is a per-pixel weight between zero and one. The weight reflects how much of an edge the pixel is on. We use this weight to do four bilinear samples when computing the final pixel color. The four samples we take are at the pixel center for a weight of zero and at the four corners of the pixel for a weight of one. This results in a weighted average of the target pixel with all eight of its neighbors. The more of an edge a pixel is, the more it is blended with its neighbors. See Listing 19-3.

Example 19-3. Shader Source for Edge Detection
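
The original listing is not reproduced here. Below is a hedged sketch of the weight computation described above; the thresholds, sampler names, and packing convention are assumptions rather than the shipped values. The resulting weight would then drive the four bilinear 'corner' samples when resolving the final color.

    sampler2D DepthBuffer;     // eye-space depth
    sampler2D NormalBuffer;    // view-space normals packed into [0,1]
    float2    PixelSize;       // 1.0 / render target width and height

    float4 EdgeWeightPS(float2 uv : TEXCOORD0) : COLOR0
    {
        const float2 offsets[8] =
        {
            float2(-1, -1), float2(0, -1), float2(1, -1),
            float2(-1,  0),                float2(1,  0),
            float2(-1,  1), float2(0,  1), float2(1,  1)
        };

        float  centerDepth  = tex2D(DepthBuffer, uv).r;
        float3 centerNormal = normalize(tex2D(NormalBuffer, uv).xyz * 2.0f - 1.0f);

        float minDelta = 1e6f;
        float maxDelta = 0.0f;
        float minCos   = 1.0f;

        [unroll]
        for (int i = 0; i < 8; i++)
        {
            float2 sampleUV = uv + offsets[i] * PixelSize;
            float  delta    = abs(tex2D(DepthBuffer, sampleUV).r - centerDepth);
            float3 normal   = normalize(tex2D(NormalBuffer, sampleUV).xyz * 2.0f - 1.0f);

            minDelta = min(minDelta, delta);
            maxDelta = max(maxDelta, delta);
            minCos   = min(minCos, dot(centerNormal, normal));
        }

        // Depth test: compare the largest gradient to the smallest one, so the
        // result depends on relative change rather than on resolution.
        float depthEdge  = saturate(maxDelta / (minDelta + 1e-5f) - 2.0f);

        // Normal test: a fixed threshold works because the change in normals
        // across neighboring pixels is not resolution dependent.
        float normalEdge = saturate((0.8f - minCos) * 4.0f);

        float weight = max(depthEdge, normalEdge);   // 0 = no edge, 1 = strong edge
        return float4(weight, weight, weight, weight);
    }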

19.6 Caveats

19.6.1 Material Properties

Choose Properties Wisely

In Tabula Rasa we target DirectX 9, Shader Model 3.0-class hardware for our deferred shading pipeline. This gives us a large potential user base, but at the same time there are constraints that DirectX 10, Shader Model 4.0-class hardware can alleviate. First and foremost is that most Shader Model 3.0-class hardware is limited to a maximum of four simultaneous render targets without support for independent render target bit depths. This restricts us to a very limited number of data channels available for storing material attributes.

Assuming a typical DirectX 9 32-bit multiple render target setup with four render targets, one exclusively for depth, there are 13 channels available to store pixel properties: 3 four-channel RGBA textures, and one 32-bit high-precision channel for depth. Going with 64-bit over 32-bit render targets adds precision, but not necessarily any additional data channels.

Even though most channels are stored as an ordinal value in a texture, in Shader Model 3.0, all access to that data is via floating-point registers. That means using bit masking or similar means of compressing or storing more data into a single channel is really not feasible under Shader Model 3.0. Shader Model 4.0 does support true integer operations, however.

It is important that these channels hold generic material data that maximizes how well the engine can light each pixel from any type of light. Try to avoid data that is specific to a particular type of light. With such a limited number of channels, each should be considered a very valuable resource and utilized accordingly.

There are some common techniques to help compress or reduce the number of data channels required for material attributes. Storing pixel normals in view space will allow storing the normal in two channels instead of three. In view space, the z component of the normals will all have the same sign (all visible pixels face the camera). Utilizing this information, along with the knowledge that every normal is a unit vector, we can reconstruct the z component from the x and y components of the normal. Another technique is to store material attributes in a texture lookup table, and then store the appropriate texture coordinate(s) in the MRT data channels.
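
A sketch of that two-channel packing, assuming view-space normals; the sign convention for the reconstructed z depends on whether camera-facing normals point along +z or -z in your view space:

    float2 PackViewSpaceNormal(float3 n)       // n must be unit length
    {
        return n.xy * 0.5f + 0.5f;             // store only x and y, mapped to [0,1]
    }

    float3 UnpackViewSpaceNormal(float2 packed)
    {
        float2 xy = packed * 2.0f - 1.0f;

        // All visible surfaces face the camera, so z has a known sign and can be
        // rebuilt from the unit-length constraint. Negate z if your convention has
        // camera-facing normals pointing down the negative z axis.
        float z = sqrt(saturate(1.0f - dot(xy, xy)));
        return float3(xy, z);
    }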

These material attributes are the 'glue' that connects material shaders to light shaders. They are the output of material shaders and are part of the input into the light shaders.

As such, these are the only shared dependency of material and light shaders. As a result, changing material attribute data can necessitate changing all shaders, material and light alike.

Encapsulate and Hide MRT Data

We do not expose the data channel or the data format of a material attribute to the material or light shaders. Functions are used for setting and retrieving all material attribute data. This allows any data location or format to change, and the material and light shaders only need to be rebuilt, not modified.

We also use a function to initialize all MRT data in every material shader. This does possibly add unnecessary instructions, but it also allows us to add new data channels in the future, and it saves us from having to modify existing material shaders. The material shader would only need to be modified if it needed to change the default value for the newly added material attribute. See Listing 19-4.

Example 19-4. Encapsulate and Hide MRT Layout from Material Shaders
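
The original listing is not reproduced here. The following sketch shows the pattern it describes, with invented attribute names and channel assignments: material shaders call only the initializer and setters, light shaders call only the getters, and the channel layout lives in one place.

    struct MaterialAttributes
    {
        float4 target0 : COLOR0;
        float4 target1 : COLOR1;
        float4 target2 : COLOR2;
    };

    // Every material shader calls this first, so channels added later get sensible
    // defaults without touching existing shaders.
    void InitMaterialAttributes(out MaterialAttributes mrt)
    {
        mrt.target0 = float4(0.0f, 0.0f, 0.0f, 0.0f);
        mrt.target1 = float4(0.5f, 0.5f, 0.0f, 0.0f);   // "flat" packed normal
        mrt.target2 = float4(0.0f, 0.0f, 0.0f, 0.0f);
    }

    // Setters used by material shaders; only these functions know the layout.
    void SetDiffuse(inout MaterialAttributes mrt, float3 diffuse) { mrt.target0.rgb = diffuse; }
    void SetSpecularPower(inout MaterialAttributes mrt, float p)  { mrt.target0.a = p / 255.0f; }
    void SetPackedNormal(inout MaterialAttributes mrt, float2 n)  { mrt.target1.xy = n; }

    // Getters used by light shaders reading the same channels back from textures.
    float3 GetDiffuse(float4 target0)       { return target0.rgb; }
    float  GetSpecularPower(float4 target0) { return target0.a * 255.0f; }
    float2 GetPackedNormal(float4 target1)  { return target1.xy; }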

19.6.2 Precision

With deferred shading, it is easy to run into issues that result from a loss of data precision. The most obvious place for loss of precision is with the storing of material attributes in the MRT data channels. In Tabula Rasa, most data channels are 8-bit or 16-bit, depending on whether 32-bit or 64-bit render targets are being used, respectively (four channels per render target). The internal hardware registers have different precisions and internal formats from the render target channel, requiring conversion upon read and write from that channel. For example, our normals are computed with the hardware's full precision per component, but then they get saved with only 8-bit or 16-bit precision per component. With 8-bit precision, our normals do not yield smooth specular highlights and aliasing is clearly visible in the specular lighting.

19.7 Optimizations

With deferred shading, the performance of lighting is directly proportional to the number of pixels on which the lighting shaders must execute. The following techniques are designed to reduce the number of pixels on which lighting calculations must be performed, and hence increase performance.

Early z-rejection, stencil masking, and dynamic branching optimizations all have something in common: dependency on locality of data. This really does depend on the hardware architecture, but it is true of most hardware. Generally, for early z-rejection, stencil masking, and dynamic branching to execute as efficiently as possible, all pixels within a small screen area need to behave homogeneously with respect to the given feature. That is, they all need to be z-rejected, stenciled out, or taken down the same dynamic branch together for maximum performance.

19.7.1 Efficient Light Volumes

We use light volume geometry that tightly bounds the actual light volume. Technically, a full screen quad could be rendered for each light and the final image would look the same. However, performance would be dramatically reduced. The fewer pixels the light volume geometry overlaps in screen space, the less often the pixel shader is executed. We use a cone-shaped geometry for spotlights, a sphere for point lights, a box for box lights, and full screen quads only for global lights such as directional lights.

Another approach documented in most deferred shading papers is to adjust the depth test and cull mode based on the locations of the light volume and the camera. This adjustment maximizes early z-rejection. This technique requires using the CPU to determine which depth test and cull mode would most likely yield the most early-z-rejected pixels.

We settled on a 'greater' depth test with 'clockwise' (that is, inverted) winding, which works in every case for us because our light volumes never get clipped by the far clip plane. Educated guesses like this can quickly pick the most likely best cull mode and depth test; our bottlenecks were elsewhere, however, so we decided not to spend CPU resources optimizing performance via this technique.

19.7.2 Stencil Masking

Using the stencil to mask off pixels is another common technique to speed up lighting in a deferred renderer. The basic technique is to use the stencil buffer to mark pixels that a light cannot affect. When rendering the light's volume geometry, one simply sets the stencil test to reject the marked pixels.

We tried several variations of this technique. We found that on average, the performance gains from this method were not great enough to compensate for any additional draw call overhead the technique may generate. We tried performing 'cheap' passes prior to the lighting pass to mark the stencil for pixels facing away from the light or out of range of the light. This variation did increase the number of pixels later discarded by the final lighting pass. However, the draw call overhead of DirectX 9.0 along with the execution of the cheap pass seemed to cancel out or even cost more than any performance savings achieved during the final lighting pass (on average).


We do utilize the stencil technique as we render the opaque geometry in the scene to mark pixels that deferred shading will be used to light later. This approach excludes pixels belonging to the sky box or any pixel that we know up front will not or should not have lighting calculations performed on it. This does not require any additional draw calls, so it is essentially free. The lighting pass then discards these marked pixels using stencil testing. This technique can generate significant savings when the sky is predominant across the screen, and just as important, it has no adverse effects on performance, even in the worst case.

The draw call overhead is reduced with DirectX 10. For those readers targeting that platform, it may be worthwhile to explore using cheap passes to discard more pixels. However, using dynamic branches instead of additional passes is probably the better option if targeting Shader Model 3.0 or later.

19.7.3 Dynamic Branching

One of the key features of Shader Model 3.0 hardware is support for dynamic branching. Dynamic branching not only increases the programmability of the GPU but, in the right circumstances, can also serve as an optimization tool.

To use dynamic branching for optimization purposes, follow these two rules:

  1. Create only one or maybe two dynamic branches that maximize both the amount of skipped code and the frequency at which they are taken.
  2. Keep locality of data in mind. If a pixel takes a particular branch, be sure the chances of its neighbors taking the same branch are maximized.

With lighting, the best opportunities for using dynamic branching for optimization are to reject a pixel based on its distance from the light source and perhaps its surface normal. If normal maps are in use, the surface normal will be less homogeneous across a surface, which makes it a poor choice for optimization.
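
A hedged sketch of a point-light pixel shader using such a branch is shown below; the sampler names, uniforms, and G-buffer layout are assumptions rather than the engine's actual interface:

    // Skip the expensive lighting math for pixels outside the light's range.
    sampler2D gbufNormal;      // rg = view-space normal xy
    sampler2D gbufDepth;       // r  = linear view-space depth
    float3    lightPosView;    // light position in view space
    float     lightRadius;
    float3    lightColor;

    float4 PointLightPS(float2 uv : TEXCOORD0, float3 viewRay : TEXCOORD1) : COLOR
    {
        // Reconstruct position and normal from the G-buffer.
        float  depth   = tex2D(gbufDepth, uv).r;
        float3 posView = viewRay * depth;
        float2 nxy     = tex2D(gbufNormal, uv).rg;
        float3 normal  = float3(nxy, sqrt(saturate(1.0 - dot(nxy, nxy))));

        float3 toLight = lightPosView - posView;
        float  distSq  = dot(toLight, toLight);

        float3 diffuse = 0;
        // Dynamic branch: neighboring pixels usually agree, so whole groups of
        // pixels can skip the lighting computation together.
        if (distSq < lightRadius * lightRadius)
        {
            float3 L     = toLight * rsqrt(distSq);
            float  atten = saturate(1.0 - distSq / (lightRadius * lightRadius));
            diffuse      = lightColor * saturate(dot(normal, L)) * atten;
        }
        return float4(diffuse, 0);
    }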

19.8 Issues

Deferred shading is not without caveats. From limited channels for storing material attribute information to constraints on hardware memory bandwidth, deferred shading has several problematic issues that must be addressed to utilize the technique.

19.8.1 Alpha-Blended Geometry

The single largest drawback of deferred shading is its inability to handle alpha-blended geometry. Alpha blending is not supported partly because of hardware limitations, but it is also fundamentally not supported by the technique itself as long as we limit ourselves to keeping track of the material attributes of only the nearest pixel. In Tabula Rasa, we solve this the same way everyone to date has: we render translucent geometry using our forward renderer after our deferred renderer has finished rendering the opaque geometry.

To support true alpha blending within a deferred renderer, some sort of deep frame buffer would be needed to keep track of every material fragment that overlapped a given pixel. This is the same mechanism required to solve order-independent transparency. This type of deep buffer is not currently supported by our target hardware.

However, our target hardware can support additive blending (a form of alpha blending) as well as alpha testing while MRTs are active, assuming the render targets use a compatible format. When alpha testing is enabled while MRTs are active, the alpha value written to render target 0 is used for the test; if the fragment fails, none of the render targets is updated. We do not use alpha testing. Instead, we use the clip command to kill a pixel while our deferred shading MRTs are active. We do this because the alpha channel of render target 0 stores other material attribute data, not diffuse alpha. Every pixel rendered within the deferred pipeline is fully opaque, so we choose not to spend one of our data channels on a useless alpha value.
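
A minimal sketch of the clip-based approach follows (the texture name and threshold are illustrative, not the game's):

    sampler2D diffuseMap;   // assumed texture whose alpha carries coverage

    void ApplyCoverageClip(float2 uv)
    {
        // clip() discards the fragment when its argument is negative, so none
        // of the active MRTs are written; the alpha channel of render target 0
        // remains free for other material data.
        float coverage = tex2D(diffuseMap, uv).a;
        clip(coverage - 0.5);   // 0.5 is an illustrative threshold
    }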

Using the forward renderer for translucent geometry mostly solves the problem. We use our forward renderer for our water and all translucent geometry. The water shader uses our depth texture from our deferred pipeline as an input. However, the actual lighting on the water is done by traditional forward shading techniques. This solution is problematic, however, because it is nearly impossible to match the lighting on the translucent geometry with that on the opaque geometry. Also, many light types and lighting features supported by our deferred renderer are not supported by our forward renderer. This makes matching lighting between the two impossible with our engine.

In Tabula Rasa, there are two main cases in which the discrepancy in lighting really became an issue: hair and flora (ground cover). Both hair and flora look best when rendered using alpha blending. However, it was not acceptable for an avatar to walk into shadow and not have his hair darken. Likewise, it was not acceptable for a field of grass to lack shadows when all other geometry around it had them.

We settled on alpha testing for hair and flora rather than alpha blending. This allowed both to be lit using deferred shading, keeping their lighting consistent with the surrounding opaque geometry. To help minimize popping of flora, we experimented with several techniques: we considered some form of screen-door transparency, and we even tried true transparency by rendering fading-in flora with our forward renderer and switching to the deferred renderer once it became fully opaque. Neither was acceptable, so we currently scale flora up and down to simulate fading in and out.

19.8.2 Memory Bandwidth

Deferred shading significantly increases memory bandwidth utilization on hardware. Instead of writing to a single render target, we render to four of them. This quadruples the number of bytes written. During the lighting pass, we then sample from all of these buffers, increasing the bytes read. Memory bandwidth, or fill rate, is the single largest factor that determines the performance of a deferred shading engine on a given piece of hardware.


The single largest factor under our control for mitigating the memory bandwidth issue is screen resolution. Memory bandwidth scales directly with the number of pixels rendered: 1280x1024 has about 66 percent more pixels than 1024x768, so fill-rate-bound frame times can be correspondingly longer. The performance of deferred shading engines is therefore tied largely to the resolution at which they render.

Checking for independent bit depth support and utilizing reduced bit depth render targets for data that does not need extra precision can help reduce overall memory bandwidth. This was not an option for us, however, because our target hardware does not support that feature. We try to minimize what material attribute data we need to save in render targets and minimize writes and fetches from those targets when possible.

When rendering our lights, we actually are using multiple render targets. We have two MRTs active and use an additive blend. These render targets are accumulation buffers for diffuse and specular light, respectively. At first this might seem to be an odd choice for minimizing bandwidth, because we are writing to two render targets as light shaders execute instead of one. However, overall this choice can actually be more efficient.

The general lighting equation that combines diffuse and specular light for the final fragment color looks like this:

Frag_lit = Frag_unlit × Light_diffuse + Light_specular

This equation is separable with respect to diffuse light and specular light. By keeping diffuse and specular light in separate render targets, we do not have to fetch the unlit fragment color inside of our light shaders. The light shaders compute and output only two colors: diffuse light and specular light; they do not compute anything other than the light interaction with the surface.
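
In shader terms, each light shader can return a small struct bound to the two accumulation targets, and additive blending sums the contributions of successive lights; the names below are illustrative:

    // Output of a light shader writing into two additively blended
    // accumulation buffers (names are assumptions).
    struct LightOutput
    {
        float4 diffuse  : COLOR0;   // contribution to the diffuse accumulation buffer
        float4 specular : COLOR1;   // contribution to the specular accumulation buffer
    };

    LightOutput EmitLight(float3 diffuseLight, float3 specularLight)
    {
        LightOutput o;
        o.diffuse  = float4(diffuseLight,  0);
        o.specular = float4(specularLight, 0);
        return o;
    }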

If we did not keep the specular light component separate from the diffuse light, the light shaders would have to actually compute the final lit fragment color. This computation requires a fetch of the unlit fragment color and a fetch of any other material attribute that affects its final lit fragment color. Computing this final color in the light shader would also mean we would lose what the actual diffuse and specular light components were; that is, we could not decompose the result of the shader back into the original light components. Having access to the diffuse and specular components in render targets lends itself perfectly to high dynamic range (HDR) or any other postprocess that needs access to 'light' within the scene.

After all light shaders have executed, we perform a final full-screen pass that computes the final fragment color. This final post-processing pass is where we compute fog, edge detection and smoothing, and the final fragment color. This approach ensures that each of these operations is performed only once per pixel, minimizing fetches and maximizing texture cache coherency as we fetch material attribute data from our MRTs. Fetching material data from MRTs can be expensive, especially if it is done excessively and the texture cache in hardware is getting thrashed.
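
A hedged sketch of that composition pass is shown below; it applies the separable lighting equation and a simple exponential fog term, with edge detection and smoothing omitted. Sampler and uniform names are assumptions:

    sampler2D gbufAlbedo;      // unlit fragment color from the material pass
    sampler2D lightDiffuse;    // diffuse light accumulation buffer
    sampler2D lightSpecular;   // specular light accumulation buffer
    sampler2D gbufDepth;       // linear view-space depth, reused for fog
    float3    fogColor;
    float     fogDensity;

    float4 CompositePS(float2 uv : TEXCOORD0) : COLOR
    {
        float3 unlit    = tex2D(gbufAlbedo,    uv).rgb;
        float3 diffuse  = tex2D(lightDiffuse,  uv).rgb;
        float3 specular = tex2D(lightSpecular, uv).rgb;

        // Frag_lit = Frag_unlit x Light_diffuse + Light_specular
        float3 lit = unlit * diffuse + specular;

        // Example of extra work folded into this single pass: exponential fog.
        float depth      = tex2D(gbufDepth, uv).r;
        float visibility = saturate(exp(-fogDensity * depth));
        return float4(lerp(fogColor, lit, visibility), 1);
    }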

Using these light accumulation buffers also lets us easily disable the specular accumulation render target if specular lighting is disabled, saving unnecessary bandwidth. These light accumulation buffers are also great for running post-processing on lighting to increase contrast, compute HDR, or any other similar effect.

19.8.3 Memory Management

In Tabula Rasa, even at a modest 1024x768 resolution, we can consume well over 50 MB of video memory just for render targets used by deferred shading and refraction. This does not include the primary back buffer, vertex buffers, index buffers, or textures. A resolution of 1600x1200 at highest quality settings requires over 100 MB of video memory just for render targets alone.

We utilize four screen-size render targets for our material attribute data when rendering geometry with our material shaders. Our light shaders utilize two screen-size render targets. These render targets can be 32 bits per pixel or 64, depending on quality and graphics settings. Add to this a 2048x2048 32-bit shadow map for the global directional light, plus additional shadow maps that have been created for other lights.
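
As a rough back-of-the-envelope check (illustrative arithmetic only, assuming 64-bit material and light accumulation targets at the highest settings): at 1600x1200, the four material MRTs occupy about 1600 x 1200 x 8 bytes x 4 ≈ 61 MB, the two light accumulation buffers add roughly 31 MB, and a single 2048x2048 32-bit shadow map adds nearly 17 MB, which already exceeds 100 MB before counting additional shadow maps or refraction targets.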

One possible approach is to render to lower-resolution targets and scale the results up at the end. This reduces bandwidth considerably, but we found the resulting image quality poor. We did not pursue this option very far, though it may work well for specific applications.

The amount of video memory used by render targets is only part of the issue. The lifetime and placement of these render targets in video memory have significant impact on performance as well. Even though the actual placement of these textures in video memory is out of our control, we do a couple of things to help the driver out.

We allocate our primary MRT textures back to back and before any other texture. The idea is to allocate these while the most video memory is available so the driver can place them in video memory with minimal fragmentation. We are still at the mercy of the driver, but we try to help it as much as we can.

We use a shadow-map pool and have lights share them. We limit the number of shadow maps available to the engine. Based on light priority, light location, and desired shadow-map size, we dole out the shadow maps to the lights. These shadow maps are not released but kept and reused. This minimizes fragmentation of video memory and reduces driver overhead associated with creating and releasing resources.

Related to this, we also throttle how many shadow maps get rendered (or regenerated) in any one frame. If multiple lights all require their shadow maps to be rebuilt on the same frame, the engine may only rebuild one or two of them that frame, amortizing the cost of rebuilding all of them across multiple frames.


19.9 Results

In Tabula Rasa, we achieved our goals using a deferred shading renderer, and we found its scalability and performance acceptable. Some early Shader Model 3.0 hardware, such as the NVIDIA GeForce 6800 Ultra, is able to hold close to 30 fps with basic settings at medium resolutions. The latest DirectX 10 class hardware, such as the NVIDIA GeForce 8800 and ATI Radeon HD 2900, is able to run Tabula Rasa extremely well at high resolutions with all settings maxed out.

Figures 19-6 through 19-10 show the results we obtained with our approach.

Figure 19-6 An Outdoor Scene with a Global Shadow Map

Figure 19-7 An Indoor Scene with Numerous Static Shadow-Casting Lights

Figure 19-9 Depth and Edge Weight Visualization
