Tuesday, December 2, 2025

Tech Art Fall Project

Comfy Houdini

I really like Houdini as a tool; it's extremely reusable. I once saw someone doing something similar on Bilibili, but it didn't feel convenient enough, so I wanted to build my own version to speed up the workflows I already use. PCG workflows map naturally onto ComfyUI's node-graph style: if we can run AI nodes directly inside Houdini and combine them with Houdini's native nodes, we can clearly boost efficiency and assemble many different workflows. This is especially true in Houdini 21.0, where the new COP context is very powerful; in some cases it can even replace Substance Designer for generating certain procedural textures.

AI is really good at accelerating traditional pipelines, especially concept design, where it can quickly generate lots of ideas and open up your thinking. For 3D, AI output is already entirely usable for distant background assets, and some of the models generated by Hunyuan 3.0 are almost good enough for mid-ground use. Progress is remarkably fast. As a tool, AI is honestly great, and I'm really looking forward to Tencent releasing 3.0 so I can deploy it locally and play with it (still wishing for it).

At the same time, because ComfyUI’s official API examples are already based on sending data over the network, I combined it with Tailscale to set up easy remote calls. Having my own little AI render farm feels really nice — I didn’t expect the knowledge I learned from game streaming remote setups to be useful here too.
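The remote call itself is small. Below is a minimal sketch in the spirit of ComfyUI's official script_examples: the `/prompt` endpoint and `{"prompt": ..., "client_id": ...}` payload shape follow those examples, while the host name is a placeholder for my Tailscale machine.

```python
import json
import urllib.request

# Hypothetical Tailscale machine name; ComfyUI listens on port 8188 by default.
COMFY_HOST = "http://workstation.tailnet:8188"

def build_prompt_request(workflow: dict, client_id: str) -> bytes:
    """Wrap a workflow (exported from ComfyUI in API format) into the JSON
    body that the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, client_id: str = "houdini-client") -> dict:
    """POST the job to the remote ComfyUI queue and return its reply,
    which includes the prompt_id used later to poll /history."""
    req = urllib.request.Request(
        COMFY_HOST + "/prompt",
        data=build_prompt_request(workflow, client_id),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

With Tailscale in place, nothing else changes: the same code works whether the workstation is on the LAN or across the internet.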




Monday, December 1, 2025

Common Art - Week 15 - Final - 'A' Stage

 

Terrain Texture V2


Grass V2

Sand V2

Substance Designer GrassV2

Based on the feedback from Stage B, I reworked both the grass and sand materials.
For the sand, I mainly lowered the saturation and brightness so it wouldn’t look so yellow.
For the grass, I pushed the style further and basically rebuilt the material from scratch, which took quite a bit more time.

Combined with the new grass billboard setup I made later, the updated grass material now gives a very nice overall grass look.


The Modified Parts

Trees

Terrain Blending

At the same time, based on the previous feedback, I also corrected the trees on the ground.

Now each tree no longer points along the landscape normal, but instead points straight up.

Also following the last round of suggestions, I researched how to achieve terrain blending in Unreal.
This setup mainly uses Virtual Textures. To make the terrain blend, I use two textures:

  • One texture stores the landscape height.

  • One texture stores the landscape material.

Once I have these two pieces of information, in the trunk material I compare the world-space vertex Z value with the landscape height, then use triplanar mapping to sample the landscape texture.
This gives me the terrain–trunk blending effect.
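Expressed as code, the height comparison is just a smoothstep between the landscape height and a band above it. This is a Python sketch of the material-graph math; `blend_band` is an assumed tunable, not a parameter from my actual material.

```python
def smoothstep(edge0: float, edge1: float, x: float) -> float:
    """Hermite interpolation, matching the HLSL smoothstep intrinsic."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def terrain_blend_weight(world_z: float, landscape_height: float,
                         blend_band: float = 50.0) -> float:
    """Returns 1.0 at or below the terrain surface, fading to 0.0 over
    blend_band units above it. This weight lerps between the
    triplanar-sampled landscape material and the trunk material."""
    return 1.0 - smoothstep(landscape_height, landscape_height + blend_band, world_z)
```

In the material the weight drives a simple lerp, so the trunk base inherits the terrain look and fades back to bark higher up.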

Grass V2

The Reference "Sky"

Again, based on the Stage B feedback, I decided to rebuild our grass and switch to another approach.
This time I collected references from Sky (a game made by the same team as Journey) and recreated a similar grass look.

I implemented billboard grass (always facing the camera) together with a wind system, and they work well.
The wind effect feels very soft and pleasant: each clump of grass sways gently in a regular rhythm.

Since I also have access to the landscape color information, I added that into the grass as well.
For each grass pivot in world space, I sample the landscape color and use it to tint the blades, so every grass clump matches the color of the terrain under it.
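In spirit, the per-pivot lookup is a world-position-to-UV mapping followed by a texture read. Here is a small Python sketch with a nearest-neighbour sample; in the actual material this is a texture sample driven by the pivot's world position, and the grid layout here is purely illustrative.

```python
def sample_landscape_color(color_map, map_size, landscape_origin,
                           landscape_extent, world_x, world_y):
    """Nearest-neighbour sample of a landscape color texture at a
    world-space grass pivot. color_map is a map_size x map_size 2D list
    of (r, g, b) tuples covering the landscape bounds."""
    # Map the world position into 0..1 UV space over the landscape.
    u = (world_x - landscape_origin[0]) / landscape_extent[0]
    v = (world_y - landscape_origin[1]) / landscape_extent[1]
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    # Convert UVs to texel indices, clamped to the last row/column.
    ix = min(int(u * map_size), map_size - 1)
    iy = min(int(v * map_size), map_size - 1)
    return color_map[iy][ix]
```

Because the sample is taken once per pivot rather than per vertex, a whole clump shares one tint and reads as a single tuft sitting on that patch of ground.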

Grass V2 Test

Apple Tree & Bushes


Apple Tree


Bushes

Finally, I also created an apple tree and some cute stylized bushes.
The apples on the tree use the same wind system as the leaves, so they also have a slight secondary motion and sway gently with the foliage.

The bushes are made almost the same way as the tree leaves, so I won’t repeat the process here.

P4V


Sunday, November 30, 2025

Tech Art - Week15 - PyQt

 

Result

Screen Shot

Task Review

Before doing any analysis, I first restated the assignment requirements.
In short, based on the NameGenerator we implemented in HW12, we now need to build a GUI on top of it to make it much easier to use.

Requirements Overview

Analysis

Because this homework involves PyQt GUI development, the natural tool to think of is Qt Designer.
However, since the goal of this assignment is to get used to writing PyQt code by hand, I only used Qt Designer at the beginning as a layout sketching tool to clarify the UI structure. I did not use its auto-generated UI code.

Qt Designer Interface

In HW12 we already finished almost all of the core naming logic.
For better UI support, I only added a small helper function to check whether a category has a Default rule.
So the main focus of this homework is how to design and implement the GUI. PyQt itself is already quite straightforward to use, so instead of explaining every low-level detail, I will just briefly summarize my GUI design.
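The Default-rule helper mentioned above can be tiny. A sketch assuming a `{category: {asset_type: rule}}` layout for the loaded JSON; the layout is my assumption here, not the assignment's exact schema.

```python
def has_default_rule(rules: dict, category: str) -> bool:
    """Return True if the given category defines a 'Default' rule.
    Assumed rules layout: {category: {asset_type: rule_string}}."""
    return "Default" in rules.get(category, {})
```

The GUI uses this to decide whether to show a `[Default]` entry in the Asset Type combo box for the selected category.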

Overall, I divided the window into three main vertical regions (left / center / right), plus a menu bar and a status bar, so that each part of the workflow is clearly separated:

  1. Left: Rules & Settings Panel

    • Shows the path of the currently loaded JSON rules file

    • Provides a “Load Rules…” button

    • Provides a Verbose checkbox and a status label, used to switch between simple history and verbose history display modes

  2. Center: Input Panel

    • A Category combo box for choosing the main asset category

    • An Asset Type combo box for choosing a specific subtype, or selecting [Default] to use the category’s default rule

    • A Base Name line edit for entering the base asset name

    • A “Generate Name” button which calls NameGenerator.generate_name() to generate the final asset name

  3. Right: Result & History Panel

    • At the top, a read-only field that displays the last generated name, making it easy to copy

    • In the middle, a history list. Depending on whether Verbose is checked, it shows either the simple history list or the verbose version (with full details)

    • At the bottom, a Quit button. Clicking it triggers the close event and pops up a confirmation dialog before exiting

  4. Menu Bar & Status Bar

    • The menu bar provides basic entries such as File (load rules / exit) and Help (About)

    • The status bar is used to give real-time feedback: for example, whether the rules file was loaded successfully or failed, and whether a name was generated successfully

The overall idea is: reuse all the naming logic from HW12’s NameGenerator, and only add a clean GUI layer on top.
The GUI breaks the pipeline “load rules → choose type → enter base name → show result/history” into clear visual sections, so that users can complete the same workflow without writing any code.

I also implemented a simple light-gray theme.
There is no complex visual design here—since this is a small tool, I think a clean and minimal look is the best choice.

Video Demo


P4V

P4V

Saturday, November 22, 2025

Tech Art - Week14 - Simple Tool Design & Implementation

 

Result


Assignment Review

As always, before doing any implementation, I first restated the assignment clearly.
In short, we need to read a GeoJSON file, convert it into geometry in Houdini, and assign a different color to each region.


Approach

From reading through the task description, the difficult part of this assignment is not the procedural modeling itself.
The handout even provides example code that shows how to use a Python SOP to create geometry from point data.


The real challenge is how to extract the information we need from the GeoJSON file.
By checking online references and opening the GeoJSON in a text editor, I clarified the overall data structure: each entry contains information such as Feature, Properties, and Geometry, and the geometry can be either a single Polygon or a MultiPolygon.
The most important field is coordinates, which stores the vertex positions of every region.

One thing to pay attention to: in GeoJSON, coordinates are 2D (longitude, latitude), while in Houdini we work in 3D space, so we need to convert them.
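The extraction step can be sketched like this. The `name` property key is an assumption (datasets vary in what they call it), and each (longitude, latitude) pair is lifted onto Houdini's ground plane as (x, 0, z).

```python
import json

def extract_region_polygons(geojson_text: str):
    """Yield (region_name, rings) per feature, where each ring is a list of
    3D points on Houdini's ground plane. Handles Polygon and MultiPolygon."""
    data = json.loads(geojson_text)
    for feature in data["features"]:
        name = feature["properties"].get("name", "unnamed")
        geom = feature["geometry"]
        if geom["type"] == "Polygon":
            polygons = [geom["coordinates"]]
        elif geom["type"] == "MultiPolygon":
            polygons = geom["coordinates"]
        else:
            continue  # skip Point, LineString, etc.
        rings = []
        for polygon in polygons:
            for ring in polygon:
                # GeoJSON stores (longitude, latitude); lift to (x, 0, z).
                rings.append([(pt[0], 0.0, pt[1]) for pt in ring])
        yield name, rings
```

Inside the Python SOP, each ring then becomes a polygon built from these points.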

Json-Info-Extraction


Once all the necessary information has been read from the GeoJSON file, the rest becomes relatively straightforward.
This assignment is purely about data visualization, and doesn’t involve object interactions or complex behaviors.
As long as the data-processing flow is clean and well structured, it’s easy to keep the overall logic clear.
Based on the assignment prompt and the way we extracted data from the JSON file, I split the whole assignment into four major functional parts to implement.


For the procedural modeling part using a Python SOP, polygons that contain holes require special handling.
Polygons built directly in the Python SOP cannot have holes, so we combine the generated geometry with a Boolean node to subtract the inner rings and fully support all these cases.

Taking “Kyrgyzstan” as an example, this region is of type Polygon and contains a hole in the middle that needs to be cut out.

Polygon-Kyrgyzstan-Hole-Example

Taking “Armenia” as another example, this region is of type MultiPolygon and also has inner areas that must be subtracted as holes.

MultiPolygon-Armenia-Hole-Example
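The hole bookkeeping for both examples follows directly from the GeoJSON spec (RFC 7946): within one Polygon's coordinates array, the first ring is the exterior boundary and any further rings are holes, which is exactly what gets fed to the Boolean subtract. A small sketch; `signed_area` is handy for sanity-checking ring winding, since the spec makes exterior rings counterclockwise and holes clockwise.

```python
def signed_area(ring) -> float:
    """Shoelace formula over a 2D ring; positive means counterclockwise,
    which the GeoJSON spec prescribes for exterior rings."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(ring, ring[1:] + ring[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2.0

def split_outer_and_holes(polygon_rings):
    """Per the GeoJSON spec, ring 0 is the exterior boundary and rings 1..n
    are holes; the holes become the cutter geometry for the Boolean SOP."""
    return polygon_rings[0], polygon_rings[1:]
```

A MultiPolygon like Armenia simply repeats this split once per member polygon before the Boolean stage.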

Final Result & Demo



P4V


Friday, November 14, 2025

Tech Art - Week13 - Final Project Description

Part I - Exceptions and JSON Input



This assignment is fairly straightforward. First, I need to map out the overall workflow (as shown in the figure), and then derive an accurate naming-format JSON from the UE5 Style Guide on GitHub. Since AI is a great tool, I just gave the link to GPT and had it compile the JSON file for me: let the AI handle the tedious grunt work to maximize efficiency. As for the custom rules, they really should be written back to the original JSON file, but for convenience during testing I wrote them to a new file instead. Also, since this assignment doesn't actually involve Houdini's API (it's all plain Python), I wrote it directly in a PyCharm project, so the JSON file sits in the same directory as the code and I didn't use an absolute path.

Json File






Part II - Final Project Description


Project "What"

Integrate my self-hosted ComfyUI + Hunyuan3D into Houdini 19.5 to build a fast asset iteration pipeline: one-click generation → auto import → auto material hookup. In Houdini, a PySide2 panel lets me enter prompts or drop a reference image to remotely trigger a ComfyUI workflow that outputs GLB models and PNG textures, then automatically builds the network, creates materials, wires textures, and places the asset in the scene for concept and greybox stages. Remote invocation is supported: ComfyUI runs on a home workstation; the client pulls results over the network and adapts them in Houdini.

Project "Why"

  • Faster ideation: compress the path from idea to visual placeholder into minutes, for greyboxing, previs, and style exploration.

  • Less repetitive work: auto naming, auto materials, auto folder layout; no manual import or texture wiring.

  • Reuse remote compute: heavy models run on a powerful home PC while the local machine handles interaction and import.

  • Scalable: can extend to half-body → Z-remesh, batch placeholder props, or temporary bridges to Unreal.

Project "How"

Implement Plan

  1. System Flow (high level)

  1. Prepare img2img, txt2img, and text-to-3D workflows on the ComfyUI side (Hunyuan3D 2.x).

  2. Submit jobs from the Houdini panel; ComfyUI queues and executes them.

  3. When complete, the client downloads GLB/OBJ and textures.

  4. Houdini automates: import model, create materials, wire textures, and place into the scene.

  5. Save assets to the project folder.

  2. Remote Networking

  • Preferred: ZeroTier or Tailscale (private networking without router port-forwarding).

  3. Technical Details and Challenges

  • GLB texture embedding: if ComfyUI’s SaveGLB does not embed textures, either output OBJ/FBX with separate textures and rebuild materials in Houdini, or post-process glTF to reference textures.

  • Scale and coordinate system: normalize units (meters/centimeters) and adjust the Up Axis during import.

  • Stability: timeouts, aborted jobs, and JSON parse errors should surface in the UI with retry options.

  • Version differences: support both Houdini 19.5 (PySide2 / Python 3.9) and Houdini 20.5 (PySide2 / Python 3.11).
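Step 3 of the system flow (downloading finished results) mostly reduces to polling ComfyUI's /history endpoint for the prompt_id and walking its outputs. Below is a hedged sketch of just the parsing part; the nested key names ('outputs', 'images', 'filename') match what current ComfyUI builds return for image outputs, but treat them as assumptions, and 3D outputs may land under a different key.

```python
def extract_output_files(history_entry: dict) -> list:
    """Collect output filenames recorded for one finished job in a
    ComfyUI /history entry. Key names are assumptions based on
    current ComfyUI builds: outputs -> node_id -> images -> filename."""
    files = []
    for node_output in history_entry.get("outputs", {}).values():
        for item in node_output.get("images", []):
            files.append(item["filename"])
    return files
```

The Houdini client would then fetch each filename from the server and hand it to the import/material automation.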


References:
https://zhuanlan.zhihu.com/p/270427440
https://www.youtube.com/watch?v=va8Jkc7o9d4&t=11s
https://github.com/comfyanonymous/ComfyUI/tree/master/script_examples
https://github.com/Tencent-Hunyuan/Hunyuan3D-2.1
https://www.youtube.com/watch?v=MpVsQG6-FM0&t=568s
https://www.bilibili.com/video/BV1Ew411776J/?spm_id_from=333.1387.homepage.video_card.click&vd_source=6375d819a70068bc1dc42d8b1361e0ac

Wednesday, November 12, 2025

Common Art - Week 12 - First Bake - 'B' Stage

 

Stylized Sand & Fix Issues





This time, my main tasks were creating a Stylized Sand Terrain Texture and helping teammates resolve various technical issues—for example, sub-level streaming problems in Unreal. In collaborative project management, lots of small, unexpected issues always pop up, and as Tech Artists we need the problem-solving skills to keep the team moving forward. Overall, our team is making steady progress!


P4V ScreenShot




Friday, November 7, 2025

Tech Art - Week12 - Class Hierarchy



Analysis

As usual, let’s start with a quick review of the assignment.


This week’s task is fairly straightforward with no major logical hurdles. The main thing to watch out for is the class hierarchy. I sketched the minimal layout as follows: the CSG class goes in CSGModule.py, while the Box and Tube classes go in ShapeModule.py.


One important detail: we use variables t, r, s, and c to store Position, Rotation, Scale, and Color respectively. When you actually modify them, make sure you are updating the node’s internal parameters. This is where __setattr__ helps a lot and greatly simplifies the code. Since __setattr__ is invoked whenever an attribute is set, we just detect changes to t, r, s, and c and then forward those changes to the node’s parameters.
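The forwarding pattern looks like this. It is a runnable sketch with a stub standing in for the Houdini node; inside Houdini, the forwarding line would call something like `self.node.parmTuple(name).set(value)` on a real hou.Node instead.

```python
class TransformProxy:
    """Forward assignments to t / r / s / c into node parameters
    via __setattr__. Any other attribute behaves normally."""

    _FORWARDED = {"t", "r", "s", "c"}

    def __init__(self, node):
        # Bypass our own __setattr__ while wiring up internal state.
        object.__setattr__(self, "node", node)

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        if name in self._FORWARDED:
            # Push the change into the node's parameters.
            self.node.parms[name] = tuple(value)

class StubNode:
    """Stands in for a hou.Node so the pattern can run outside Houdini."""
    def __init__(self):
        self.parms = {}
```

Because `__setattr__` fires on every assignment, writing `shape.t = (1, 2, 3)` is all it takes to keep the node in sync, with no explicit setter calls in user code.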


Because this week’s assignment asks you to split classes across different files and call them from a main file, you can simply place those modules into Houdini’s Python library so they can be imported directly.

Example (Houdini 20.5.613)

  1. Find Houdini's Start Menu folder:

C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Side Effects Software\Houdini 20.5.613

  2. Go to the Python libs directory:

C:\Program Files\Side Effects Software\Houdini 20.5.613\houdini\python3.11libs

  3. Put the files there:

CSGModule.py ShapeModule.py

  4. In your Houdini Python script, import the classes:

from ShapeModule import *

Note: I didn’t import CSGModule.py explicitly here because it is already imported inside ShapeModule.py, so there’s no need to import it again.