Neural Networks Engineering

@neural_network_engineering · 💻 Technology · 🇬🇧 English · 📅 March 2026

Author-run channel about neural network development and mastering machine learning. Experiments, tool reviews, personal research. #deep_learning #NLP Author: @generall93

#39
Subscribers: 2.5K
Avg. reach: 5.3K
Engagement rate: 216.2%
Posts per day: 19

Charts

📊 Average post reach
📉 ERR % by day
📋 Posts by day
📎 Content types

Top posts

19 of 19
neural_network_engineering
31 Aug, 14:15

How many layers to fine-tune? Model fine-tuning lets you improve the quality of a pre-trained model with just a fraction of the resources spent on training the original. But there is a trade-off between the number of layers you tune and the precision you get. Tuning fewer layers allows faster training with a larger batch size, while tuning more layers increases the model's capacity. We've run experiments so you can make a more educated choice. Highlights: - Training only the head of a...

👁 8.5K
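The trade-off above is usually implemented by freezing the lower layers and training only the top ones. A minimal schematic in plain Python (the `Layer` class and `requires_grad` flag mimic the PyTorch convention; all names and parameter counts here are illustrative):

```python
# Schematic of partial fine-tuning: freeze every layer except the last
# n_tune. Layer stands in for real framework modules; requires_grad
# mirrors the PyTorch convention.

class Layer:
    def __init__(self, name, n_params):
        self.name = name
        self.n_params = n_params
        self.requires_grad = True  # trainable by default

def freeze_all_but_last(layers, n_tune):
    """Mark only the last n_tune layers as trainable."""
    cutoff = len(layers) - n_tune
    for layer in layers[:cutoff]:
        layer.requires_grad = False
    return layers

# A 12-layer encoder plus a small task head.
model = [Layer(f"encoder.{i}", 1_000_000) for i in range(12)] + [Layer("head", 10_000)]
freeze_all_but_last(model, n_tune=1)  # tune only the head

trainable = sum(l.n_params for l in model if l.requires_grad)
print(trainable)  # 10000 -- a tiny fraction of the 12,010,000 total parameters
```

With `n_tune=1` only the head's 10K parameters receive gradients; raising `n_tune` trades training cost for capacity, which is exactly the knob the experiments explore.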
neural_network_engineering
29 Jun, 13:46

One of the framework's main features is caching. It allows you to run inference on large models only once and then use the cached vectors during training. This speeds up the process by up to 100x, while allowing batch sizes that are unattainable in other ways. (gif)

👁 7.8K🎬 video
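The caching idea can be sketched in a few lines of plain Python (a toy stand-in, not the framework's actual API): compute each embedding once, then serve every later epoch from the cache.

```python
# Toy sketch of inference caching: the expensive encoder runs once per
# sample; every subsequent epoch reads the stored vector instead.

class CachedEncoder:
    def __init__(self, encode_fn):
        self.encode_fn = encode_fn  # the expensive frozen model
        self.cache = {}
        self.calls = 0              # number of real forward passes

    def __call__(self, sample_id, raw_input):
        if sample_id not in self.cache:
            self.calls += 1
            self.cache[sample_id] = self.encode_fn(raw_input)
        return self.cache[sample_id]

# Stand-in "model": embeds a string as its length.
encoder = CachedEncoder(lambda text: [float(len(text))])
dataset = [("a", "hello"), ("b", "worlds")]

for epoch in range(100):   # 100 training epochs ...
    vectors = [encoder(sid, text) for sid, text in dataset]

print(encoder.calls)  # 2 -- one forward pass per unique sample, not 200
```

Because the frozen encoder never changes, its outputs are safe to reuse, and the trainable part of the pipeline can consume cached vectors in batches far larger than the encoder itself could handle.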
neural_network_engineering
25 Mar, 13:24

Triplet loss - Advanced Intro. Loss functions in metric learning all chase the same goal: bring positive pairs closer and push negatives farther apart. But how they achieve this leads to different results and different side effects. In today's post, we describe the differences between Triplet and Contrastive loss, and why Triplet loss can give an advantage, especially in the context of fine-tuning. It also covers an efficient implementation of batch-all triplet mining.

👁 7.3K
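Batch-all triplet mining can be written compactly with broadcasting. A minimal NumPy sketch using squared Euclidean distances (function and argument names are ours, not from any particular library):

```python
import numpy as np

def batch_all_triplet_loss(embeddings, labels, margin=0.5):
    """Average hinge loss over all valid (anchor, positive, negative) triplets."""
    dots = embeddings @ embeddings.T
    sq = np.diag(dots)
    # Pairwise squared Euclidean distances, shape (n, n).
    dist = np.maximum(sq[:, None] - 2.0 * dots + sq[None, :], 0.0)

    same = labels[:, None] == labels[None, :]
    n = len(labels)
    # loss[a, p, neg] = d(a, p) - d(a, neg) + margin
    loss = dist[:, :, None] - dist[:, None, :] + margin
    # Valid triplet: a != p, label(a) == label(p), label(a) != label(neg).
    valid = same[:, :, None] & ~same[:, None, :] & ~np.eye(n, dtype=bool)[:, :, None]
    hinged = np.where(valid, np.maximum(loss, 0.0), 0.0)
    return hinged.sum() / max(valid.sum(), 1)

labels = np.array([0, 0, 1, 1])
tight = np.array([[0.0], [0.1], [5.0], [5.1]])   # classes well separated
collapsed = np.zeros((4, 1))                      # everything at one point
print(batch_all_triplet_loss(tight, labels))      # 0.0
print(batch_all_triplet_loss(collapsed, labels))  # 0.5 (= margin)
```

Building all n³ candidate triplets with one broadcast and masking out the invalid ones is the whole trick: no explicit mining loop, and every informative triplet in the batch contributes.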
neural_network_engineering
29 Jun, 13:45

Similarity Learning lacks a framework. So we built one. Many general-purpose frameworks let you train Computer Vision or NLP models quickly. However, Similarity Learning has peculiarities that usually require an additional layer of complexity on top of the usual pipelines. For example, batch size plays a much greater role in training similarity models than in other models. Labels either do not exist or are handled in a completely different way. In many cases, the model is alre...

👁 7.0K
neural_network_engineering
4 May, 14:40

Metric Learning for Anomaly Detection. Anomaly detection is one of those tasks where it is challenging to apply classical ML methods directly. The imbalance between normal and abnormal examples and the internal inconsistency of anomalies make classifier training a challenging task. The difficulty is often related to data labeling, which in the case of anomalies may not be trivial. The metric learning approach avoids the explicit separation into classes while combining the advantage of modeling ...

👁 6.4K
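One simple way to score anomalies with learned embeddings, sketched here with NumPy (our own illustration of the general idea, not the post's exact method): keep a reference set of normal examples and score a query by its distance to the k-th nearest normal neighbor — no explicit anomaly class needed.

```python
import numpy as np

def anomaly_score(query, normal_embeddings, k=3):
    """Distance to the k-th nearest normal example; higher = more anomalous."""
    dists = np.linalg.norm(normal_embeddings - query, axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[k - 1])

rng = np.random.default_rng(0)
normals = rng.normal(0.0, 0.1, size=(50, 2))  # tight "normal" cluster near the origin

inlier = np.array([0.05, 0.0])
outlier = np.array([4.0, 4.0])
print(anomaly_score(inlier, normals) < anomaly_score(outlier, normals))  # True
```

Because the score is purely geometric, adding newly discovered anomaly types requires no retraining of a classifier — only a good embedding space and a reference set of normal data.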
neural_network_engineering
15 Dec, 14:29

For those who reacted with 🦀 to the previous post: I wrote a Twitter thread on how I am building Qdrant with Rust. It is on Twitter because development is still in progress, and I wanted to share some interesting details without a dedicated blog post. Some topics of the thread: - How is Qdrant useful? - How does it store data and build indexes? - How does it keep data always available for search? - How do I auto-generate documentation in Rust? Your comments are welcome here and on Twitter!

👁 6.2K
neural_network_engineering
15 Sep, 13:08

ODS.ai Summer of Code results. Hi everyone, ODS SoC officially finished last week, and it is time to present the results. First of all, the winner of the Metric Learning track, Tatiana Grechishcheva, has published a detailed article on her work fine-tuning and deploying metric learning models. She fine-tuned a ViT model for matching similar clothing and put together a detailed tutorial on how to deploy such a model to production. An online demo is also included! There are als...

👁 6.1K
neural_network_engineering
20 Jan, 11:45

Awesome Metric Learning. The Metric Learning approach to data science problems is heavily underutilized. There is a lot of academic research around it but far fewer practical guides and tutorials. So we decided to help people adopt metric learning by collecting related materials in one place. We are publishing a curated list of awesome practical metric learning tools, libraries, and materials - https://github.com/qdrant/awesome-metric-learning! This collection aims to put toge...

👁 6.0K
neural_network_engineering
8 Aug, 14:34

Vector Similarity beyond Search. Vector similarity offers a range of powerful functions that go far beyond those available in traditional full-text search engines and conventional kNN search. We have only scratched the surface of the topic but have already found many new ways to interact with data, including: - Dissimilarity search - applicable to anomaly detection, mislabeling detection, and data cleaning. - Diversity search - useful for giving a better overview of the data,...

👁 6.0K
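Both ideas reduce to small tweaks of the usual nearest-neighbor scoring. A NumPy sketch of each (our own toy formulation): dissimilarity search ranks by ascending cosine similarity, and diversity search here uses greedy farthest-point selection.

```python
import numpy as np

def dissimilarity_search(query, vectors, top=2):
    """Indices of the vectors LEAST similar (cosine) to the query."""
    sims = (vectors @ query) / (
        np.linalg.norm(vectors, axis=1) * np.linalg.norm(query)
    )
    return np.argsort(sims)[:top].tolist()  # ascending similarity

def diverse_subset(vectors, k):
    """Greedy farthest-point selection: a small subset that spans the data."""
    chosen = [0]
    while len(chosen) < k:
        # Distance from every point to its nearest already-chosen point.
        d = np.linalg.norm(vectors[:, None, :] - vectors[chosen][None, :, :], axis=-1)
        chosen.append(int(d.min(axis=1).argmax()))
    return chosen

points = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
print(dissimilarity_search(np.array([1.0, 0.0]), points))  # [3, 2]
print(diverse_subset(points, 2))                            # [0, 3]
```

A real engine would run these against an index instead of dense matrices, but the scoring logic — flip the sort for dissimilarity, maximize pairwise spread for diversity — is the same.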
neural_network_engineering
10 Jun, 10:58

Neural Search Step-by-Step. We made a tutorial on Semantic Embeddings and Neural Search. With this guide, you will build your own semantic search service from scratch. You won't need any complicated neural network training. Moreover, you can do all the preparation steps in a Google Colab notebook. The tutorial includes: - What is Neural Search? - Getting embeddings from a BERT encoder - Using the Qdrant vector search engine - Creating an API server with FastAPI. If you want to learn how to build p...

👁 5.9K
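At its core, such a service is a cosine-similarity top-k over an embedding matrix. A minimal NumPy stand-in (toy vectors instead of real BERT embeddings; a production version would delegate this step to Qdrant):

```python
import numpy as np

def search(query_vec, doc_vecs, top_k=2):
    """Return (doc_index, cosine_score) pairs, best match first."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    order = np.argsort(-scores)[:top_k]
    return [(int(i), float(scores[i])) for i in order]

# Toy 3-dimensional "embeddings" for four documents.
docs = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.7, 0.7, 0.0],
    [0.0, 0.0, 1.0],
])
query = np.array([1.0, 0.1, 0.0])
print(search(query, docs))  # doc 0 ranks first, then doc 2
```

Swap the toy vectors for encoder outputs and put this function behind a FastAPI endpoint, and you have the skeleton of the service the tutorial builds.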

Hook types

Neutral: 16 | 5.3K views
Statistics: 2 | 4.1K views
Question: 1 | 8.5K views

Post length

Long (500-1000): 12 | 5.8K views
Very long (1000+): 3 | 2.7K views
Short (<200): 2 | 5.1K views
Medium (200-500): 2 | 6.5K views

Emoji impact

With emoji (2): 6.1K views
Without emoji (17): 5.2K views
+16.9% reach with emoji

Content types

🎬 video: 1 | 7.8K views
📝 text: 18 | 5.2K views
Neural Networks Engineering (@neural_network_engineering) - Telegram channel | PostSniper