This section is a development log of my second attempt at creating my own 3D game using Unity Engine and C#. This game uses elements of my first Unity project as a foundation, amplified by new and revised features and a deeper understanding of Unity's capabilities.
Blueprints
I created blueprints that are bound to the movement of the player's cursor. To do this, a build layer is used as the target of the cursor's ray-casting, which tracks the blueprint's position in relation to the player's cursor.
Colliders were set on the main building components to validate building placement against previously placed buildings on the same physics layer. Provided the blueprint is anchored to a viable location, the user is
able to click and instantiate the corresponding building in its place. The functionality of the blueprints is governed by a single script component shared across all blueprints. The script has been set up to accommodate all types of buildings,
sub-meshes, and composite buildings during placement.
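As a rough illustration, the cursor-following behaviour might look like the sketch below; the layer mask, field names, and ray distance are assumptions rather than the project's actual values.

```csharp
// Minimal sketch (not the project's actual script): moving a blueprint along
// the build layer by ray-casting from the cursor.
using UnityEngine;

public class BlueprintFollowCursor : MonoBehaviour
{
    [SerializeField] private LayerMask buildLayer;   // layer the cursor ray is projected onto
    [SerializeField] private Camera mainCamera;

    private void Update()
    {
        // Project a ray from the cursor into the scene and keep the blueprint
        // under the cursor while the hit stays within the build layer.
        Ray ray = mainCamera.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit, 500f, buildLayer))
        {
            transform.position = hit.point;
        }
    }
}
```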
Placement Validation
Blueprints have three states of validity: default, invalid, and anchored. In its default state, the blueprint is rendered in a blue colour. The blueprint follows the cursor's movements within the scene, as long as the projection
is within the bounds of the build layer. In its invalid state, the collider of the main building blueprint is in contact with an existing object on the physics layer, e.g. another building. The blueprint is rendered
red and the user is unable to place the object, even if it is snapped to an existing building anchor. The anchored state occurs when the blueprint is snapped to a viable position, and the blueprint is rendered white. This is
the state in which the user is able to place the intended object defined by the blueprint.
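A minimal sketch of how the three states could drive the blueprint's tint is shown below; the enum, colours, and renderer handling are illustrative assumptions, not the exact implementation.

```csharp
// Minimal sketch: mapping the three validity states to the blueprint's tint.
using UnityEngine;

public enum BlueprintState { Default, Invalid, Anchored }

public class BlueprintTint : MonoBehaviour
{
    [SerializeField] private Renderer blueprintRenderer;

    public void SetState(BlueprintState state)
    {
        // Default = blue, Invalid = red, Anchored = white (placeable).
        Color tint = state switch
        {
            BlueprintState.Invalid  => Color.red,
            BlueprintState.Anchored => Color.white,
            _                       => Color.blue,
        };
        blueprintRenderer.material.color = tint;
    }
}
```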
Placement Anchors
Anchors are their own prefab components, which the blueprint script component detects when within a viable snapping range. Placement indicators are shown to the user if the player is in 'Build mode'.
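The anchor detection could look something like the following sketch, assuming the anchor prefabs sit on their own physics layer and are found with an overlap check; the names and radius are hypothetical.

```csharp
// Minimal sketch: find the closest anchor within snapping range of the blueprint.
using UnityEngine;

public class AnchorSnap : MonoBehaviour
{
    [SerializeField] private LayerMask anchorLayer;     // layer the anchor prefabs live on
    [SerializeField] private float snapRange = 1.5f;    // viable snapping range

    public bool TrySnapToAnchor(out Vector3 anchorPosition)
    {
        anchorPosition = transform.position;
        Collider[] hits = Physics.OverlapSphere(transform.position, snapRange, anchorLayer);
        if (hits.Length == 0) return false;

        // Pick the closest anchor within range and report its position for snapping.
        float best = float.MaxValue;
        foreach (Collider hit in hits)
        {
            float distance = Vector3.SqrMagnitude(hit.transform.position - transform.position);
            if (distance < best)
            {
                best = distance;
                anchorPosition = hit.transform.position;
            }
        }
        return true;
    }
}
```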
Camera controls, including rotation, field of view, and movement.
15/03/2025
Camera controls
I've followed a tutorial (referenced below) to implement camera movement through the scene. The user can either (1) move by dragging with the right mouse button across the screen, or (2) keep the cursor at the side of the screen
to edge scroll.
For the mouse-dragging functionality, the existing implementation only allowed the camera to move across the distance determined by the mouse drag. This meant that if the user kept the mouse held down, the camera would
stop moving after a short period of time (once the original dragging distance had been traversed). The original functionality also followed a 'grab and move' approach, i.e. the drag direction was the opposite of the direction of camera
movement. Instead, I wanted the camera movement to be sustained in the direction of the mouse drag for as long as the right mouse button was held down. To match the intended functionality, I inverted the vector direction of mouse-drag
camera movement to better reflect the sustained direction of movement. This was done by simply subtracting the direction vector from the position of the camera's focal point. To make the movement persist for as long as the player
holds down the right mouse button, I had to (1) store and update the position vector of the start of the drag in relation to the focal point of the camera, and (2) have the end point of the mouse drag compute the direction
vector of the camera movement against the relative position computed in step (1). As the map is traversed, step (1) is updated so step (2) can use it as a reference when recalculating the movement, keeping the magnitude of the
direction (and in turn the velocity) constant.
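A simplified sketch of the sustained drag movement is below. It normalises the drag direction each frame so the velocity stays constant while the right mouse button is held, which approximates the 'store and refresh the drag start' approach described above; the field names and speed value are assumptions.

```csharp
// Minimal sketch: sustained camera movement while the right mouse button is held.
using UnityEngine;

public class CameraDragMove : MonoBehaviour
{
    [SerializeField] private Transform focalPoint;    // parent focal point the camera follows
    [SerializeField] private float dragSpeed = 10f;

    private Vector3 dragStartScreen;                   // screen position where the drag began

    private void Update()
    {
        if (Input.GetMouseButtonDown(1))
        {
            // (1) Store the screen position at the start of the drag.
            dragStartScreen = Input.mousePosition;
        }
        else if (Input.GetMouseButton(1))
        {
            // (2) Direction from the stored start to the current cursor position,
            //     mapped onto the XZ ground plane and normalised for constant speed.
            Vector3 dragDelta = Input.mousePosition - dragStartScreen;
            Vector3 direction = new Vector3(dragDelta.x, 0f, dragDelta.y).normalized;

            // Subtract the direction so the camera moves with the drag rather than
            // 'grabbing' the scene; movement is sustained while the button stays held.
            focalPoint.position -= direction * dragSpeed * Time.deltaTime;
        }
    }
}
```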
I've revised how the camera controls operate compared to my previous project, to improve the user experience. The camera rotation is now limited to pivoting around the Y axis only. I've also made it so that the parent camera focal point is the
object that pivots, rather than changing the camera's own rotation and position relative to the focal point.
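For illustration, pivoting the parent focal point around the Y axis only might look like this sketch; the input bindings and rotation speed are assumptions.

```csharp
// Minimal sketch: the parent focal point pivots around world Y, and the camera,
// as a child, follows without its own transform being modified.
using UnityEngine;

public class FocalPointRotation : MonoBehaviour
{
    [SerializeField] private float rotationSpeed = 90f;   // degrees per second

    private void Update()
    {
        // Example bindings: Q/E rotate the focal point left/right around Y only.
        float input = 0f;
        if (Input.GetKey(KeyCode.Q)) input = -1f;
        if (Input.GetKey(KeyCode.E)) input = 1f;

        transform.Rotate(Vector3.up, input * rotationSpeed * Time.deltaTime, Space.World);
    }
}
```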
I've also added some hotkeys to allow the user to reset their camera position and rotation. If the user presses [Ctrl] + [Tab] once, the camera moves back to its original rotational angle and focal
length. If the user presses [Ctrl] + [Tab] again, the camera moves towards the origin of the scene. Both of these transitions have a fixed duration of one second. During this implementation, I learned
that interpolation ('lerping') using Time.deltaTime multiplied by a scale factor (usually characterised as transition speed) does not actually arrive at its final target point. Instead, the interpolated value approaches
its final value asymptotically. After reading up on some articles, I found an alternative approach to linearly 'lerp' to a target state over a fixed time duration.
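The fixed-duration approach boils down to feeding elapsed time divided by the duration into the lerp, so the value reaches its target exactly when the duration ends. A minimal sketch, with assumed names, is below.

```csharp
// Minimal sketch: lerp over a fixed duration by tracking elapsed time,
// instead of the asymptotic Time.deltaTime * speed pattern.
using System.Collections;
using UnityEngine;

public class CameraReset : MonoBehaviour
{
    public IEnumerator MoveToOrigin(Transform focalPoint, float duration = 1f)
    {
        Vector3 start = focalPoint.position;
        float elapsed = 0f;

        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            // t goes from 0 to 1 over exactly 'duration' seconds.
            float t = Mathf.Clamp01(elapsed / duration);
            focalPoint.position = Vector3.Lerp(start, Vector3.zero, t);
            yield return null;
        }
    }
}
```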
HUD fading transitions
I laid the foundations for transitioning the visibility of UI displays using canvas group alpha channels upon certain triggers. This will become useful once certain UI elements become dependent on play modes and triggered events.
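A minimal sketch of such a fade, assuming a coroutine that drives a CanvasGroup's alpha over a fixed duration, could look like this; the class and field names are placeholders.

```csharp
// Minimal sketch: fade a HUD element via its CanvasGroup alpha over a fixed duration.
using System.Collections;
using UnityEngine;

public class HudFader : MonoBehaviour
{
    [SerializeField] private CanvasGroup canvasGroup;

    public IEnumerator Fade(float targetAlpha, float duration = 0.5f)
    {
        float startAlpha = canvasGroup.alpha;
        float elapsed = 0f;

        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            canvasGroup.alpha = Mathf.Lerp(startAlpha, targetAlpha, Mathf.Clamp01(elapsed / duration));
            yield return null;
        }
    }
}
```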
Exporting From Blender
Before exporting the models from Blender, I had to make sure the model properties were standardised and compatible for the purposes of development. This included:
The object-level scale should be applied so the model is to scale; this means the underlying local-level mesh must be scaled correctly relative to the object. This makes it easier to manipulate the mesh and reference its properties once imported into Unity.
To prevent the loss of mesh data, I converted all faces into triangles to avoid intersecting faces, and ensured all face normals were outward-facing.
For the shaders to display correctly on the model meshes, the UV map must be recalculated to accommodate graphical transitions over adjacent faces. This was done by using the 'Smart UV project' option.
Once the above was complete, the export process to FBX format is as follows:
Set transform scaling to FBX Units Scale.
Set transform directions to Z forward & Y up.
Check Apply Transform.
To import into Unity:
For model import settings, check 'bake axis conversion' & 'read/write enabled'.
For animation import settings, remove duplicated animations and check 'loop time'.
For material import settings, overwrite with the materials created in Unity.
Dissolve shader on game objects.
Dissolve shader
To begin the main part of development, I created a dissolve shader to animate the creation of in-game objects. I followed an existing tutorial for the base shader graph, and controlled the dissolve rate with a script component. To replace the
dissolve shaders with the main static materials of game objects, I created a function that takes a list of static materials from the inspector, pattern-matches their substrings against the shader materials assigned to the mesh renderer,
and performs the replacement once the dissolve coroutine has completed. As a result, we now have a script that can receive any number of replacement static materials, as long as each currently used mesh material has a corresponding replacement. Multiple
current mesh materials can be replaced by the same static material. The flexibility of this script means the component can be applied to any mesh that requires this dissolve effect.
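A rough sketch of the replacement step is below; the substring-matching rule and field names are assumptions standing in for the actual script.

```csharp
// Minimal sketch: swap dissolve materials for their static counterparts once the
// dissolve coroutine finishes, matching each current material to the first static
// material whose name is contained in the current material's name.
using System.Collections.Generic;
using UnityEngine;

public class DissolveMaterialSwap : MonoBehaviour
{
    [SerializeField] private MeshRenderer meshRenderer;
    [SerializeField] private List<Material> staticMaterials;   // replacements assigned in the inspector

    public void ReplaceDissolveMaterials()
    {
        Material[] current = meshRenderer.materials;

        for (int i = 0; i < current.Length; i++)
        {
            foreach (Material replacement in staticMaterials)
            {
                // Substring match between the dissolve material and its static counterpart.
                if (current[i].name.Contains(replacement.name))
                {
                    current[i] = replacement;
                    break;
                }
            }
        }

        meshRenderer.materials = current;   // reassign so the renderer picks up the swap
    }
}
```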
Setting up a new Unity project
After a period of more than 2 years, I've decided to get back into the world of Unity by starting a brand new project. The aim of this project is to develop an enhanced version of one of my
previous projects, by leveraging the knowledge I've gained these past two years and paying more attention to the performance of the runtime logic. I will be using a fair
share of game assets and scripts from my previous project to accelerate development, as well as new models and improved scripting. Much of the code and editor configurations will be refactored and optimised to improve maintainability,
scalability, performance, and readability.
Setting the scene
To first set up the development scene, I gave myself a refresher on how to establish the Universal Render Pipeline (URP). The main purpose of this is to support the shaders and visual effects that will feature heavily throughout the
game's development. To do this, I set up a URP Renderer Asset in the project files, assigned the reference in the project's configuration, and added a global volume object to the game's primary scene where development will take place.