## About this project
This project is a personal experiment created out of curiosity. Most of the code was generated by an AI assistant; my role was to set the goal, prepare the data, run the training, and evaluate the results. The model is trained to remove artifacts (JPEG compression, noise) from images and shows good results.
## Artifacts Remover UNet
This is a lightweight UNet-based model trained to remove JPEG compression artifacts and additive Gaussian noise from images. The model is well suited for integration into image processing pipelines, including the popular ComfyUI framework.
## Examples
## How to use in ComfyUI
### Step 1: Install the Node
1. Open a terminal or command prompt.
2. Navigate to your ComfyUI `custom_nodes` directory:

   ```bash
   # Example for Windows
   cd D:\ComfyUI\custom_nodes\
   # Example for Linux
   cd ~/ComfyUI/custom_nodes/
   ```

3. Clone this repository into the `custom_nodes` folder:

   ```bash
   git clone https://github.com/SnJake/SnJake_JPG_Artifacts_Noise_Cleaner.git
   ```

4. Install the required Python packages. The command depends on which version of ComfyUI you are using.
**For standard ComfyUI installations (with venv):**

1. Make sure your ComfyUI virtual environment (`venv`) is activated.
2. Navigate into the new node directory and install the requirements:

   ```bash
   cd SnJake_JPG_Artifacts_Noise_Cleaner
   pip install -r requirements.txt
   ```
**For portable ComfyUI installations:**

1. Navigate to the root of your portable ComfyUI directory (e.g., `D:\ComfyUI_windows_portable`).
2. Run the following command to use the embedded Python to install the requirements. Do not activate any venv:

   ```bash
   python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\SnJake_JPG_Artifacts_Noise_Cleaner\requirements.txt
   ```
### Step 2: Install the Model Weights

(You can skip this step; the node will download the model weights itself after starting the Queue.)

1. Navigate to your `ComfyUI/models/` directory.
2. Create a folder named `artifacts_remover` inside it, if it doesn't already exist.
3. Download the model weights file (`.pt` or `.safetensors`).
4. Place the downloaded weights file into the `ComfyUI/models/artifacts_remover/` directory.
### Step 3: Restart
Restart ComfyUI completely. The new node will be available in the "Add Node" menu.
## Training Details
The model was trained on a dataset of approximately 30,000 high-quality images, primarily consisting of anime-style art. Instead of using pre-degraded images, the training process generated (degraded, clean) image pairs on-the-fly.
- **Architecture:** The network is a `UNetRestorer` built with `ResidualBlock`s for deep feature extraction. To enhance important features, the deeper levels of the encoder utilize the Convolutional Block Attention Module (CBAM). The model employs a final residual connection, learning to predict the difference (clean - degraded) rather than the entire clean image.
- **Degradation Process:** Each clean image patch was subjected to a sequence of randomly ordered degradations:
  - JPEG Compression: A random quality level was chosen between 5 and 85.
  - Gaussian Noise: Gaussian noise was added with a standard deviation randomly selected from the range [0.0, 7.0].
  - Identity Mapping: With a 20% probability (`--clean-prob 0.2`), the input image was left clean (not degraded). This encourages the model to preserve details when no artifacts are present.
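The degradation pipeline described above can be sketched roughly as follows. This is my reconstruction, not the repository's actual training code: the function names are mine, and I assume the noise standard deviation is on the 0-255 pixel scale.

```python
# Sketch of on-the-fly (degraded, clean) pair generation, per the
# description above. Assumptions (mine): sigma is on the 0-255 scale,
# and degradations are implemented with Pillow/NumPy.
import io
import random
import numpy as np
from PIL import Image

def jpeg_compress(img: np.ndarray, quality: int) -> np.ndarray:
    """Round-trip a uint8 HxWx3 array through JPEG at the given quality."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf).convert("RGB"))

def add_gaussian_noise(img: np.ndarray, sigma: float) -> np.ndarray:
    """Add zero-mean Gaussian noise and clip back to valid pixel range."""
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def make_pair(clean: np.ndarray, clean_prob: float = 0.2):
    """Return a (degraded, clean) training pair."""
    if random.random() < clean_prob:  # identity mapping: --clean-prob 0.2
        return clean.copy(), clean
    ops = [
        lambda x: jpeg_compress(x, random.randint(5, 85)),       # quality 5-85
        lambda x: add_gaussian_noise(x, random.uniform(0.0, 7.0)),  # sigma [0, 7]
    ]
    random.shuffle(ops)  # degradations applied in random order
    degraded = clean
    for op in ops:
        degraded = op(degraded)
    return degraded, clean

patch = np.random.randint(0, 256, (320, 320, 3), dtype=np.uint8)
degraded, clean = make_pair(patch)
```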
- **Training Procedure:**
  - Optimizer: AdamW with a learning rate of `2e-4` and weight decay of `1e-4`.
  - Learning Rate Scheduler: Cosine annealing with a linear warmup phase of 2000 steps.
  - Batch & Patch Size: The model was trained with a batch size of 12 using 320x320 pixel patches.
  - Loss Function: A comprehensive, multi-component loss function was employed to balance pixel accuracy, structural integrity, and perceptual quality:
    - Primary Loss: A weighted sum of `0.7 * CharbonnierLoss` (a smooth L1 variant) and `0.3 * MixL1SSIM`. The `MixL1SSIM` component itself was weighted with `alpha=0.9`, combining L1 loss and a structural similarity term (`0.9*L1 + 0.1*(1-SSIM)`).
    - Edge Loss: `GradientLoss` was added with a weight of 0.15 (`--edge-loss-w 0.15`) to penalize blurry edges and promote sharpness.
    - High-Frequency Error Norm (HFEN): To better preserve fine textures and details, `HFENLoss` was included with a weight of 0.12 (`--hfen-w 0.12`).
    - Identity Loss: For the 20% of samples where the input was clean, an additional L1 loss with a weight of 0.5 (`--id-loss-w 0.5`) was calculated between the model's output and the input. This forces the network to act as an identity function for high-quality images, preventing it from introducing blur or altering details.
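As a rough illustration of how the primary loss terms combine, here is a NumPy sketch. The function names and the `eps` constant are my assumptions; the real `CharbonnierLoss`/`MixL1SSIM` implementations (including the SSIM computation, which is taken as a precomputed value here) live in the training code, and the edge, HFEN, and identity terms are omitted.

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    # Charbonnier loss: a differentiable smooth-L1 variant (eps assumed)
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def l1(pred, target):
    return np.mean(np.abs(pred - target))

def primary_loss(pred, target, ssim_value):
    # MixL1SSIM with alpha=0.9: 0.9*L1 + 0.1*(1-SSIM);
    # ssim_value is assumed precomputed elsewhere
    mix = 0.9 * l1(pred, target) + 0.1 * (1.0 - ssim_value)
    # Weighted sum from the training config: 0.7*Charbonnier + 0.3*MixL1SSIM
    return 0.7 * charbonnier(pred, target) + 0.3 * mix
```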
  - Techniques: Training was accelerated using Automatic Mixed Precision (AMP) with the `bfloat16` data type. An Exponential Moving Average (EMA) of the model's weights (decay=0.999) was maintained to produce a more stable and generalized final model for inference.
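The EMA update mentioned above follows the standard exponential-moving-average rule; a minimal framework-independent sketch (my illustration, not the project's code):

```python
def ema_update(ema_weights, weights, decay=0.999):
    """One EMA step: each shadow weight moves a fraction (1 - decay)
    toward the current training weight. decay=0.999 as in the README."""
    return [decay * e + (1.0 - decay) * w for e, w in zip(ema_weights, weights)]
```

At inference time, the EMA (shadow) weights are used instead of the raw training weights, which tends to smooth out late-training oscillations.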
## Limitations and Potential Issues
- The model was trained on a dataset consisting primarily of anime-style art. Results on photos, line art, or text may be suboptimal.
- With very high levels of noise or artifacts beyond the training range, the model may hallucinate details or over-smooth the image.
- The model might interpret very fine, low-contrast textures (e.g., fabric, sand) as noise and smooth them out. For such cases, use the `blend` parameter in the node to mix back some of the original detail.
- The model does not correct for other types of degradation, such as motion blur, chromatic aberrations, or optical flaws.
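Assuming the node's `blend` parameter is a simple linear mix between the model output and the original image (my reading of the parameter, not confirmed from the node's source), the mixing step would look like:

```python
import numpy as np

def blend_output(original: np.ndarray, restored: np.ndarray, blend: float = 0.7):
    """Linear mix: blend=1.0 keeps only the model output,
    blend=0.0 returns the original image unchanged."""
    return blend * restored + (1.0 - blend) * original
```

Lowering `blend` preserves more of the original fine texture at the cost of retaining more of the artifacts.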