Thursday, February 26, 2026

Common Art - Spring - Lighting Workshop Module 3

 Final






Ref



Analysis + Creative process

My creative intent for this piece is not to make a heavily stylized environment. Instead, I want to create a natural, fresh atmosphere with a subtle sense of Chinese philosophy. For this “Lost Temple” scene, I’m aiming for an early-morning sunrise in a mountain ruin, when the mist hasn’t fully lifted yet. The fog won’t be exaggerated or overly stylized; I just want to convey a calm, everyday morning in a forgotten temple.

In the first image, I placed the entrance of the lost temple. Its worn-down, decayed state suggests that it’s an ancient place with no people around. The overall environment is relatively dark, but the brightness at the entrance is meant to draw the viewer’s attention—my goal is to guide the audience into the world of the scene through this inviting focal point.

In the second image, you can see a strong backlit setup. This is meant to communicate the time of day and the overall lighting condition of the environment. The third image reinforces what the first two are establishing: the ruined atmosphere and the sunrise timing.

The fourth image is the core shot. You can see that I didn’t add an overly dramatic top light to “glorify” the Buddha, because this is a damaged, weathered statue. I also don’t want the scene to feel overly artistic or theatrical. What I want to express is a sense of quiet plainness—despite the decay, the Buddha returns to simplicity and blends into the everyday world.

That said, I still tried to suggest a sense of sacredness, even if it’s subtle (because I don’t want it to be too exaggerated). I added a warmer rim light on the Buddha, and compositionally I framed the statue from the right looking toward the left. So in the lighting, I made a small deliberate choice: cooler light on the left side and warmer light on the right side, to hint at a gentle, warm radiance.


You can also see that I added fog to the environment. Unreal’s built-in fog effects couldn’t achieve the kind of sunrise atmosphere I wanted—where the mist hasn’t fully lifted yet—so I implemented my own volumetric fog to create that mood.





Thursday, February 5, 2026

Tech Art - Spring - HW4 - Matrix Rendering Transformations

 


Assignment contents: 


Implementation Logic:

For this assignment, although my previous homework provided a solid foundation, I still refactored my Matrix4x4 / Vector4 math system to make it cleaner and more complete for this mini-renderer. This includes matrix multiplication, matrix–vector multiplication, transpose, and the w-component convention to distinguish points vs. directions. On top of that, I built the full pipeline for TRS, View, Projection, and the mapping from Clip/NDC space to pixel coordinates. To validate the implementation, I created two tests (Test and Test_Advanced) and then visualized the final result.

Test (Direct verification using the assignment parameters)

In the first test, I strictly used the parameters provided by the assignment to construct and print:

  • World (TRS): W = T * R * S, with rotation composed as R = Rz * Ry * Rx

  • View: V = R^T * T(-C) (the inverse of the camera’s world transform)

  • Projection: both the Orthographic and Perspective projection matrices

  • Final: MVP = P * V * W

Then, for the 8 vertices of the unit cube, I applied the full chain:

  • clip = MVP * local_point

  • ndc = clip / w

  • ndc → screen/pixel mapping

and printed each vertex’s local / clip / ndc / pixel values for inspection.
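The vertex chain can be sketched roughly as follows (a minimal standalone version with illustrative helper names; my actual submission uses my own Matrix4x4 / Vector4 classes, and this sketch uses identity V and W so MVP reduces to P):

```python
import math

def mat_vec(m, v):
    # 4x4 row-major matrix times a column vector (x, y, z, w)
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def perspective(fov_y_deg, aspect, near, far):
    # Symmetric perspective projection (OpenGL-style, NDC z in [-1, 1])
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def to_pixel(ndc, width, height):
    # NDC [-1, 1] -> pixel coordinates; y is flipped so +y points up on screen
    px = (ndc[0] * 0.5 + 0.5) * width
    py = (1.0 - (ndc[1] * 0.5 + 0.5)) * height
    return (px, py)

P = perspective(60.0, 16 / 9, 0.1, 100.0)
MVP = P                                   # identity V and W in this sketch
local_point = [0.5, 0.5, -2.0, 1.0]       # a point (w = 1) in front of the camera
clip = mat_vec(MVP, local_point)          # clip = MVP * local_point
ndc = [c / clip[3] for c in clip]         # perspective divide
pixel = to_pixel(ndc, 1920, 1080)         # NDC -> pixel mapping
```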

Test_Advanced (A more standard approach with a Camera class)

After the first test worked, I implemented an additional Camera class to better match a standard graphics pipeline structure. This class encapsulates camera parameters and generates the corresponding matrices (especially V and P, along with resolution/frustum-related setup). In Test_Advanced, I regenerated V and P through the Camera class and combined them with the same W to form:
MVP = P * V * W

I then repeated the same vertex transformation and pixel mapping process to ensure both approaches produced consistent results.
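The Camera-class idea can be condensed like this (hypothetical parameter and method names; rotation is left as identity here, so V collapses to the T(-C) part of V = R^T * T(-C)):

```python
import math

class Camera:
    """Stores camera parameters and hands back V and P on request."""

    def __init__(self, position, fov_y_deg, aspect, near, far):
        self.position = position
        self.fov_y_deg = fov_y_deg
        self.aspect = aspect
        self.near = near
        self.far = far

    def view_matrix(self):
        # V = R^T * T(-C); with identity R this is just the translation
        # that moves the camera position to the origin.
        cx, cy, cz = self.position
        return [
            [1.0, 0.0, 0.0, -cx],
            [0.0, 1.0, 0.0, -cy],
            [0.0, 0.0, 1.0, -cz],
            [0.0, 0.0, 0.0, 1.0],
        ]

    def projection_matrix(self):
        # Same perspective construction as the direct test
        f = 1.0 / math.tan(math.radians(self.fov_y_deg) / 2.0)
        n, fa = self.near, self.far
        return [
            [f / self.aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (fa + n) / (n - fa), 2 * fa * n / (n - fa)],
            [0.0, 0.0, -1.0, 0.0],
        ]

cam = Camera(position=(0.0, 0.0, 5.0), fov_y_deg=60.0,
             aspect=16 / 9, near=0.1, far=100.0)
V = cam.view_matrix()
P = cam.projection_matrix()
```

With this in place, Test_Advanced only asks the camera for V and P instead of assembling them inline.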

Visualization (matplotlib is only used for drawing)

To present the final result, I used matplotlib to draw the points and edges after they were projected into pixel space (scatter/plot for the points and line segments, plus axis limits and grid display). All computations (matrix construction, vertex transforms, perspective divide, and NDC-to-pixel mapping) are performed entirely by my own code; matplotlib does not participate in any of the math and simply visualizes the pixel coordinates I computed.


P4V:

I wrote the code in PyCharm. In the PyCharm project, I set up a virtual environment and installed the required libraries.




Monday, February 2, 2026

Tech Art - Spring - HW3 - Matrix Calculator 2

 


Assignment contents: 


Analysis:

This week’s tasks were fairly straightforward. The first part was to implement the matrix transformation that moves a model’s vertices from model space to world space. The second part was to build the interface for last week’s matrix calculator.

For Task 1, Nitin already explained the concept very clearly in class, so I won’t repeat it here. For Task 2, I tried to stay as faithful as possible to my previous UI design, but for practicality I also added an extra button under the Result section. It lets you copy the result matrix back into Matrix A, which makes chaining multiple operations much more convenient.


P4V:




Monday, January 26, 2026

Tech Art - Spring - HW2 - Matrix Calculator



Assignment contents: 


Analysis:

The code portion of this assignment is relatively straightforward—it's essentially an implementation of basic matrix calculations. My overall approach and structure are roughly as follows:

Core data structure: The matrix is stored as a 4×4 two-dimensional list in row-major order, which makes row/column access and arithmetic operations straightforward.

Construction & creation: There are two ways to create a matrix: if no data is provided, it defaults to a zero matrix; if a 4×4 set of rows is provided, the input is validated first and then all values are converted to floats to keep computations consistent. The class also provides static helpers like Zero, Identity, and FromRows to quickly generate common matrices.

Input validation: During construction, the code checks that the input is truly 4×4 and that every element is numeric, so errors are caught early and later operations don’t fail in confusing ways.

Display output: A formatted output function prints the matrix with a fixed number of decimal places and aligns columns by computing each column’s width, making the result easier to read.

Math features: The class implements essential matrix operations: addition, subtraction, scalar multiplication, matrix multiplication (using the standard row-by-column dot product rule), and transpose (swapping rows and columns). It supports both explicit method calls and operator-based usage for convenience.

Testing & verification: A simple test routine at the bottom of the file constructs example matrices and runs each feature in sequence to confirm the outputs and results behave as expected.
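The structure described above can be condensed into a short sketch (method names are illustrative, not the exact ones in my submission):

```python
class Matrix4x4:
    """4x4 matrix stored as a row-major 2D list of floats."""

    def __init__(self, rows=None):
        if rows is None:
            # No data provided: default to a zero matrix
            self.rows = [[0.0] * 4 for _ in range(4)]
            return
        # Validate shape early so later operations can't fail confusingly
        if len(rows) != 4 or any(len(r) != 4 for r in rows):
            raise ValueError("Matrix4x4 requires a 4x4 set of rows")
        # Convert every element to float to keep computations consistent
        self.rows = [[float(v) for v in r] for r in rows]

    @staticmethod
    def Identity():
        return Matrix4x4([[1.0 if i == j else 0.0 for j in range(4)]
                          for i in range(4)])

    def __add__(self, other):
        return Matrix4x4([[a + b for a, b in zip(ra, rb)]
                          for ra, rb in zip(self.rows, other.rows)])

    def __mul__(self, other):
        # Standard row-by-column dot product rule
        return Matrix4x4([[sum(self.rows[i][k] * other.rows[k][j]
                               for k in range(4))
                           for j in range(4)]
                          for i in range(4)])

    def transpose(self):
        # Swap rows and columns
        return Matrix4x4([list(col) for col in zip(*self.rows)])

ident = Matrix4x4.Identity()
M = Matrix4x4([[1, 2, 3, 4], [5, 6, 7, 8],
               [9, 10, 11, 12], [13, 14, 15, 16]])
```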

UI Design:



P4V:





Wednesday, January 14, 2026

Tech Art - Spring - HW1 - Vector Calculator

 

Result:



Assignment contents: 


Analysis:

This assignment does not require a large amount of analysis. From the provided example file, it is already very clear what we are expected to build. In essence, it is just implementing a set of math operations and visualizing the results, so the overall complexity is not high.

The only detail that needs special attention is that the vectors use homogeneous coordinates: w = 0 represents a direction vector, and w = 1 represents a point. As long as you handle the arithmetic rules between points and direction vectors correctly, everything will be fine. That mainly means you should separate cases and apply the correct semantics for each operation.

For example:

  • Addition

    • Point + Direction = Point
      (translate a point by a direction)

    • Direction + Direction = Direction
      (vector addition)

    • Point + Point = Undefined / Not Allowed
      (adding two positions usually has no geometric meaning)

  • Subtraction

    • Point − Point = Direction
      (direction from the second point to the first point)

    • Point − Direction = Point
      (move the point backward along the direction)

    • Direction − Direction = Direction
      (vector difference)

    • Direction − Point = Undefined / Not Allowed
      (subtracting a position from a direction is not meaningful)

  • Scalar multiplication

    • Direction × Scalar = Direction
      (scale direction magnitude)

    • Point × Scalar = Typically Not Allowed
      (scaling a point is not a standard affine operation unless you explicitly define an origin-based scaling rule; most pipelines treat this as undefined)

  • Dot product

    • Only defined for Direction · Direction → float

    • If a point is involved, you should either reject it or explicitly interpret it as a direction from the origin (but that changes meaning, so rejecting is usually cleaner for assignments).

  • Cross product

    • Only defined for Direction × Direction → Direction

    • Point involvement should be rejected.

  • Unitize (~)

    • Only defined for Direction → Direction

    • Unitizing a point should be rejected.

  • Angle between

    • Only defined for Direction vs Direction → float (degrees)

    • If either is zero-length, no valid solution.
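The case rules above fall out naturally from the w components themselves (1+0=1 gives a point, 1−1=0 gives a direction), so the code mainly has to reject the undefined combinations. A minimal sketch (hypothetical class; my actual calculator has more operations and error messages):

```python
class Vector4:
    """Homogeneous vector: w = 1 is a point, w = 0 is a direction."""

    def __init__(self, x, y, z, w):
        self.x, self.y, self.z, self.w = float(x), float(y), float(z), float(w)

    def is_point(self):
        return self.w == 1.0

    def is_direction(self):
        return self.w == 0.0

    def __add__(self, other):
        # Point + Point is undefined; the valid cases just add w components
        if self.is_point() and other.is_point():
            raise ValueError("Point + Point is undefined")
        return Vector4(self.x + other.x, self.y + other.y,
                       self.z + other.z, self.w + other.w)

    def __sub__(self, other):
        # Direction - Point is undefined (it would produce w = -1)
        if self.is_direction() and other.is_point():
            raise ValueError("Direction - Point is undefined")
        return Vector4(self.x - other.x, self.y - other.y,
                       self.z - other.z, self.w - other.w)

    def dot(self, other):
        # Only defined for Direction . Direction
        if not (self.is_direction() and other.is_direction()):
            raise ValueError("Dot product is only defined for directions")
        return self.x * other.x + self.y * other.y + self.z * other.z

p = Vector4(1, 2, 3, 1)   # point
d = Vector4(0, 0, 1, 0)   # direction
q = p + d                 # Point + Direction -> Point
```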


P4V:





Tuesday, December 2, 2025

Tech Art Fall Project

 Comfy Houdini

I really like Houdini as a tool because it’s extremely reusable. I once saw someone on Bilibili doing something similar, but I felt their version wasn’t convenient enough, so I wanted to build my own to speed up the workflows I already use. PCG workflows actually match ComfyUI’s workflow style very well. If we can run AI nodes directly inside Houdini and combine them with Houdini’s native nodes, it can clearly boost our efficiency and let us create many different workflows. Especially with Houdini 21.0, the new COP context is very powerful; in some cases it can even replace Substance Designer for generating certain procedural textures.

AI is really good at accelerating traditional pipelines, especially for concept design, where it can quickly generate a lot of ideas and help open up your thinking. For 3D, AI is already completely usable for distant background assets, and some of the models generated by Hunyuan 3.0 are almost good enough for mid-ground use. The speed of progress is really fast. As a tool, AI is honestly great. I’m really looking forward to Tencent releasing 3.0 so I can deploy it locally and play with it (still wishing for it).

At the same time, because ComfyUI’s official API examples are already based on sending data over the network, I combined this with Tailscale to set up easy remote calls. Having my own little AI render farm feels really nice; I didn’t expect the knowledge I picked up from setting up remote game streaming to be useful here too.
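The remote call itself is just an HTTP POST to ComfyUI’s /prompt endpoint. A rough sketch of what I mean (the Tailscale address is a placeholder, and `build_prompt_payload` / `queue_prompt` are my own illustrative names, not part of ComfyUI):

```python
import json
import urllib.request

def build_prompt_payload(workflow, client_id="houdini"):
    # ComfyUI's /prompt endpoint expects the API-format workflow graph
    # under the "prompt" key.
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow, host="100.x.y.z", port=8188):
    # host is whatever Tailscale assigns to the render machine
    # (placeholder here); 8188 is ComfyUI's default port.
    data = json.dumps(build_prompt_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Build (but don't send) a payload for a tiny one-node workflow fragment
payload = build_prompt_payload({"3": {"class_type": "KSampler", "inputs": {}}})
```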




Monday, December 1, 2025

Common Art - Week 15 - Final - 'A' Stage

 

Terrain Texture V2


Grass V2

Sand V2

Substance Designer GrassV2

Based on the feedback from Stage B, I reworked both the grass and sand materials.
For the sand, I mainly lowered the saturation and brightness so it wouldn’t look so yellow.
For the grass, I pushed the style further and basically rebuilt the material from scratch, which took quite a bit more time.

Combined with the new grass billboard setup I made later, the updated grass material now gives a very nice overall grass look.


The Modified Parts

Trees

Terrain Blending

At the same time, based on the previous feedback, I also corrected the trees on the ground.

Now each tree no longer points along the landscape normal, but instead points straight up.

Also following the last round of suggestions, I researched how to achieve terrain blending in Unreal.
This setup mainly uses Virtual Textures. To make the terrain blend, I use two textures:

  • One texture stores the landscape height.

  • One texture stores the landscape material.

Once I have these two pieces of information, in the trunk material I compare the world-space vertex Z value with the landscape height, then use triplanar mapping to sample the landscape texture.
This gives me the terrain–trunk blending effect.
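As a rough Python sketch of the blend logic (the real version lives in the trunk material graph; the function names and the linear falloff over `blend_range` are illustrative assumptions):

```python
def terrain_blend_weight(world_z, landscape_height, blend_range=50.0):
    """How strongly a trunk pixel should show the landscape material.

    1.0 at or below the landscape surface, fading linearly to 0.0 over
    blend_range units above it (clamped, like saturate() in a shader).
    """
    t = (world_z - landscape_height) / blend_range
    return max(0.0, min(1.0, 1.0 - t))

def lerp_color(a, b, t):
    # final color = lerp(trunk_color, landscape_color, weight)
    return tuple(ca + (cb - ca) * t for ca, cb in zip(a, b))

trunk_color = (0.35, 0.25, 0.18)
landscape_color = (0.30, 0.45, 0.20)   # from the triplanar landscape sample
w = terrain_blend_weight(world_z=120.0, landscape_height=100.0)
color = lerp_color(trunk_color, landscape_color, w)
```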

Grass V2

The Reference "Sky"

Again, based on the Stage B feedback, I decided to rebuild our grass and switch to another approach.
This time I collected references from Sky (a game made by the same team as Journey) and recreated a similar grass look.

I implemented billboard grass (always facing the camera) together with a wind system, and they work well.
The wind effect feels very soft and pleasant: each clump of grass sways gently in a regular rhythm.

Since I also have access to the landscape color information, I added that into the grass as well.
For each grass pivot in world space, I sample the landscape color and use it to tint the blades, so every grass clump matches the color of the terrain under it.

Grass V2 Test

Apple Tree & Bushes


Apple Tree


Bushes

Finally, I also created an apple tree and some cute stylized bushes.
The apples on the tree use the same wind system as the leaves, so they also have a slight secondary motion and sway gently with the foliage.

The bushes are made almost the same way as the tree leaves, so I won’t repeat the process here.

P4V