MicroCloud Hologram Inc. announces optimization of stacked sparse autoencoders through DeepSeek model

SHENZHEN, China, Feb. 14, 2025 /PRNewswire/ -- MicroCloud Hologram Inc. (NASDAQ: HOLO) ("HOLO" or the "Company"), a technology service provider, announced the deep optimization of stacked sparse autoencoders through the DeepSeek open-source model, injecting new vitality into anomaly detection technology and providing an efficient solution.

Data quality is crucial for model performance. The behavioral data collected in the data preprocessing stage typically contains multiple features with different dimensions and numerical ranges. To eliminate the dimensional influence between different features and improve the effectiveness of model training, HOLO applies normalization.

Normalization is a common data preprocessing technique that scales data to a specific range, typically between 0 and 1 or between -1 and 1. By doing so, data from different features can be compared and analyzed on the same scale, avoiding the situation where certain features dominate model training due to their large value ranges. In HOLO's detection project, normalization not only improved the efficiency of model training but also laid a solid foundation for subsequent feature extraction. Data processed through normalization is better aligned with the input requirements of deep learning models, enabling the model to learn intrinsic patterns more accurately.
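The min-max scaling described above can be sketched as follows. This is an illustrative example, not HOLO's actual pipeline; the feature values (e.g. a count-like feature next to a latency-like feature) are made up to show how columns with very different ranges end up on the same [0, 1] scale.

```python
import numpy as np

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    """Scale each feature column of X independently to the [0, 1] range."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    # Guard against division by zero for constant-valued features.
    span = np.where(x_max - x_min == 0, 1.0, x_max - x_min)
    return (X - x_min) / span

# Two features on very different scales: a small count vs. a large magnitude.
X = np.array([[10.0, 2000.0],
              [20.0, 8000.0],
              [30.0, 5000.0]])
X_norm = min_max_normalize(X)
print(X_norm)  # every column now lies in [0, 1]
```

After scaling, neither feature dominates training simply because its raw values are numerically larger.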

After data preprocessing is completed, the next step is to input the processed data into the stacked sparse autoencoder model. An autoencoder is an unsupervised learning model designed to encode input data into a lower-dimensional feature representation through an encoder, and then reconstruct the original input as accurately as possible through a decoder; between the encoder and decoder, a hidden layer carries the learned feature representation of the data. The stacked sparse autoencoder builds on this idea: it is a powerful deep learning architecture composed of multiple autoencoder layers, with each layer responsible for extracting features at a different level. HOLO utilizes the DeepSeek model to dynamically adjust the strength and manner of the sparsity constraint, ensuring that the features learned by each autoencoder layer are sparse and representative. By appropriately setting the sparsity constraint, the model can better capture key information in the data and reduce redundant features.
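A common way to impose the sparsity constraint mentioned above is to add a KL-divergence penalty that pushes each hidden unit's average activation toward a small target value. The sketch below is a minimal, assumed formulation (random weights, a single forward pass) to show how the penalty enters the loss; it is not HOLO's implementation, and the target sparsity `rho` and penalty weight `beta` are illustrative hyperparameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_sparsity(rho, rho_hat):
    """KL divergence between target activation rho and observed mean activations."""
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

rng = np.random.default_rng(0)
X = rng.random((100, 8))             # 100 samples, 8 normalized input features

W_enc = rng.normal(0, 0.1, (8, 4))   # encoder: 8 inputs -> 4 hidden units
b_enc = np.zeros(4)
W_dec = rng.normal(0, 0.1, (4, 8))   # decoder: 4 hidden units -> 8 outputs
b_dec = np.zeros(8)

H = sigmoid(X @ W_enc + b_enc)       # hidden (encoded) representation
X_rec = sigmoid(H @ W_dec + b_dec)   # reconstruction of the input

rho = 0.05                           # target average activation (sparsity level)
rho_hat = H.mean(axis=0)             # observed average activation per hidden unit
beta = 3.0                           # weight of the sparsity penalty

# Total loss: reconstruction error plus the sparsity penalty.
loss = np.mean((X - X_rec) ** 2) + beta * kl_sparsity(rho, rho_hat)
```

Because the KL term grows whenever hidden units are active too often, minimizing this loss drives most activations toward zero, so only a few neurons fire for any given input.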

HOLO has innovated and optimized the stacked sparse autoencoder by utilizing the DeepSeek model. This technique employs a greedy, layer-wise training approach, optimizing the parameters of each autoencoder layer step by step. The core of this layered training strategy is to first train the lower layers of the autoencoder to learn the basic features of the input data, then use the output of the lower-layer autoencoder as the input for the next layer, continuing training and progressively extracting deeper features. In this way, the model is able to gradually capture the complex relationships within the data, enhancing its expressive power. Each layer of the autoencoder is constrained by sparsity, ensuring that the learned features are sparse, meaning that only a few neurons are activated, allowing the model to learn more compact and effective feature representations.
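The greedy, layer-wise strategy described above can be sketched as follows: train one autoencoder on the raw input, then train the next autoencoder on the first layer's hidden codes. This is a simplified toy version using plain gradient descent on reconstruction error only (the sparsity penalty is omitted for brevity); the layer sizes and data are illustrative, not drawn from HOLO's system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200, seed=0):
    """Train one autoencoder layer on X by gradient descent on MSE.
    Returns the encoder parameters and the encoded features."""
    rng = np.random.default_rng(seed)
    n, n_in = X.shape
    W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)            # encode
        X_rec = sigmoid(H @ W2 + b2)        # decode
        # Backpropagate the mean-squared reconstruction error.
        d_out = (X_rec - X) * X_rec * (1 - X_rec) / n
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid;  b1 -= lr * d_hid.sum(axis=0)
    return W1, b1, sigmoid(X @ W1 + b1)

# Greedy layer-wise stacking: each layer trains on the previous layer's codes.
rng = np.random.default_rng(1)
X = rng.random((200, 16))                       # toy normalized input data
_, _, H1 = train_autoencoder(X, n_hidden=8)     # layer 1: 16 -> 8 features
_, _, H2 = train_autoencoder(H1, n_hidden=4)    # layer 2: 8 -> 4 deeper features
print(H1.shape, H2.shape)
```

Each layer is optimized in isolation before the next is trained, which is what makes the procedure "greedy": lower layers settle on basic features first, and deeper layers progressively compress those into more abstract representations.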