
Our client

The company is a startup founded by veterinary experts who wanted to create smart solutions for animal healthcare.

They aimed to create wearables and complete products dedicated to professional veterinary caregivers, such as animal clinics and hospitals.


We were tasked with taking care of the project as an interim product owner. We needed to create a viable product that could be tested. It involved both software and hardware aspects.

We were initially tasked with creating a machine learning algorithm for the device. After conducting discovery workshops, our team proposed a new roadmap for the product.


The team of experts proposed a deep-learning approach for the initial classification and assessment of the severity of the illness. The data could be later extracted and used to monitor the disease further. 

After identifying the necessary steps to clean the provided data, we employed deep learning models and a deep convolutional neural network based on raw data for pattern recognition. 

Applied technologies:

  • Discovery and analysis
  • Product roadmap
  • Data analysis
  • Deep neural network
  • AI modelling
  • Hardware advisory
How to distinguish OA disease among dogs using AI

Veterinary experts with a mission and a need for technology assistance



The startup was founded by veterinary experts with a plan to support professionals and veterinary clinics with technological solutions. They planned to develop an end-to-end solution that would help diagnose osteoarthritis disease among dogs.


The device’s design had to be compact and light enough to attach easily to a dog’s collar. It would then record movements while the dog walked or ran, pinpointing indicators of potential OA symptoms and alerting the owner to seek further diagnosis. While our client had vast domain knowledge, they needed technology experts to complete the product’s technical part.

The first significant challenge involved a tight budget and timeline. The company had already tried cooperating with other tech companies, but the results fell short and a lot of resources were wasted. They needed a partner to help make the product viable enough for testing whether it could work accurately enough to justify investing in the final product.


Cleaning the data for reliable results

Before developing the algorithm, our team (which also comprised PhD-level specialists) needed to evaluate the available collected data. Upon examining it, our team noticed that the data wasn’t correctly annotated. There were also issues with the logistic regression model and overall technical trouble with computer-aided detection and data collection from the gyroscope.


We decided to employ deep learning models, specifically deep convolutional neural networks, to recognise patterns in the collected raw data. We wanted to prove that the deep learning model generalised across different breeds. At first, we relied on the previously collected training data provided to us by data scientists.

Our solution involved developing the right artificial intelligence algorithms and models to process the data from the accelerometers and gyroscopes, creating classification algorithms that operate on the collected data sets, and adjusting the software/hardware interfacing to obtain stable results.
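Raw accelerometer and gyroscope streams are typically split into fixed-length windows before any classifier sees them. As a minimal sketch of that preprocessing step (the window length and step below are illustrative assumptions, not the project's actual parameters):

```python
def sliding_windows(samples, window=200, step=100):
    """Split a raw sensor stream into fixed-length, overlapping windows.

    `window` and `step` are hypothetical values; a real gait model would
    tune them to the sensor's sampling rate.
    """
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

# A fabricated stream of accelerometer magnitudes:
stream = [0.1 * i for i in range(500)]
wins = sliding_windows(stream)
```

Each window then becomes one training example for the convolutional network, which learns its own features instead of relying on hand-crafted ones.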

Machine learning as the optimal approach to success


Machine learning and artificial intelligence were at the heart of the project. Our task was to analyse the provided data sets of dog movement measurements and, with the help of machine learning models, distinguish between dogs with and without OA based on the sensors’ raw data.

Based on the training data provided by data scientists, we started by monitoring dogs’ activity over a long period, since the model’s precision depended on it. After analysing the input and output variables, we introduced significant improvements to the logistic regression and machine learning models. We sought the discriminative features and data points crucial for determining a dog’s illness. Based on the results, we were able to determine whether a dog had OA.

Our team proposed a deep neural network to improve the classification of OA severity and the diagnostic accuracy of the device. Our efforts allowed us to sort through all the data and extract deep features that distinguish dogs with OA symptoms. We delivered a high-quality deep learning model designed to keep working as more data arrives in the future.
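Discriminative gait features of the kind mentioned above can be as simple as per-window statistics. The toy rule below is a stand-in for the actual deep model; the features, threshold, and sample values are all invented for illustration:

```python
import statistics

def gait_features(window):
    """Hypothetical per-window features: mean and spread of movement."""
    return statistics.mean(window), statistics.pstdev(window)

def classify(window, spread_threshold=0.5):
    """Toy rule standing in for the deep model: an irregular
    (high-variance) gait window is flagged as a potential OA indicator."""
    _, spread = gait_features(window)
    return "possible-OA" if spread > spread_threshold else "healthy"

steady = [1.0, 1.1, 0.9, 1.0, 1.05]   # fabricated regular gait
erratic = [0.2, 2.0, 0.1, 1.8, 0.3]   # fabricated irregular gait
```

A deep network replaces the hand-picked threshold with features learned from labelled examples, but the input/output contract is the same: a window of raw samples in, a severity class out.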

The milestones of the process

  • Discovery and analysis meetings with the client to understand their context and ultimate goals
  • Auditing the work and results of the previous partners
  • Assessing the feasibility of the product
  • Planning the roadmap of the process and development
  • Proposing a methodology for creating a viable product
  • Developing and verifying the algorithm
  • Drafting the blueprints for large-scale testing

Critical process outtakes:

  • Cleaning the provided data to make work on an optimal solution easier
  • Proving the product feasibility to ensure good prospects for further development
  • Outlining a solid methodology to effectively boost TRL


Our experts and the client were happy with the results achieved during our cooperation. We were able to extract all the necessary data and implement further measures. The speed of the delivery and the quality of provided solutions left the product in a state that allowed for further testing and development of its final stages. Our team built a solid and reliable foundation for the company to proceed with the development upon receiving funds.

Important figures

  • 180h – the time it took to complete the requested part of the project (as opposed to the estimated 240h)
  • 80% – the average level of accuracy the model displayed


Estimate your project.

Just leave your email address and we’ll be in touch soon

About the client

  • Name: Swiftly 
  • Line of business: Automated Recruiting and Unbiased Recruitment tools
  • Founding year: 2020
  • Country: Sweden

Problem overview

Swiftly, a Stockholm-based startup, grappled with two significant challenges within their job portal. Firstly, accurate categorization of job listings posed difficulties, leading to suboptimal user experiences and ineffective job matching. Secondly, the manual job application process was time-consuming and resource-intensive, restricting scalability.

Proposed solution

Our approach comprised two pivotal components:

  • Web Scraping Tool: We developed a sophisticated web scraping tool to extract precise keywords from job listings, enhancing categorization accuracy.
  • SOTA Presentation: We created a visionary state-of-the-art (SOTA) presentation, demonstrating automated field auto-fill capabilities to streamline the application process.
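The production tool used Python and Selenium against live pages; as a self-contained illustration of the keyword-extraction idea behind the categorization improvement, here is a stdlib-only sketch (the category vocabulary and sample listing are made up):

```python
import re

# Hypothetical category vocabulary; the real tool derived this from data.
CATEGORY_KEYWORDS = {
    "engineering": {"python", "backend", "api", "sql"},
    "design": {"figma", "ux", "wireframe"},
}

def extract_keywords(listing_text):
    """Return the keyword hits per category for a scraped job listing."""
    words = set(re.findall(r"[a-z]+", listing_text.lower()))
    return {cat: sorted(words & vocab)
            for cat, vocab in CATEGORY_KEYWORDS.items()}

listing = "Backend developer needed: Python, REST API, SQL experience."
hits = extract_keywords(listing)
```

Matching extracted keywords against per-category vocabularies is what lets listings be routed to the right category automatically instead of by hand.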

Applied technologies:

  • Python, Selenium and FastAPI were used to implement a service able to scrape form fields from a given website and automatically fill in the forms once the data is provided.
  • Neo4J and PostgreSQL were the databases used for storing graph data describing relations between job offers, job seekers and other entities, which can be used to look for mutual associations, as well as more general, structured metadata of job offers.
  • Sklearn was used to implement a recommendation engine that looks for the best matches between job seekers and job offers.
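The production engine was built with Sklearn; the core matching idea can be sketched with plain cosine similarity over skill vectors. The skill axes and scores below are illustrative assumptions, not Swiftly's actual feature set:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(seeker, offers):
    """Pick the offer whose skill vector is closest to the seeker's."""
    return max(offers, key=lambda name: cosine(seeker, offers[name]))

# Hypothetical skill axes: [python, sql, ux]
seeker = [1.0, 0.8, 0.0]
offers = {"backend-dev": [1.0, 1.0, 0.0], "designer": [0.0, 0.1, 1.0]}
```

Graph relations stored in Neo4J would feed extra association signals into such a ranking; the vector comparison above is only the final scoring step.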

Pre-existing Challenges:  

Before implementing the SOTA and POC solutions, Swiftly faced several challenges:
  • Inaccurate Categorization: Swiftly encountered difficulties in accurately categorizing job listings, causing mismatched job offers and candidates.
  • Manual Application Process: Manual application processes consumed time and resources, impeding scalability.
  • Insufficient Automation: The absence of automated keyword extraction led to imprecise job listing categorization.
  • Scaling Issues: Manual processes and categorization limitations hindered scalability.
  • Lacking Technological Strategy: Swiftly lacked a comprehensive technology-based strategy to enhance categorization accuracy and streamline processes.

Implementation Approach

Our implementation strategy followed these steps:
Initial Talks and Kickoff

Collaborative discussions between Swiftly’s leadership and our technical team laid the groundwork for a productive partnership, aligning expectations and goals.

Team Composition

An 8-person team, comprising ML Engineers, Embedded Systems Engineers, Data Scientists, and Fullstack Developers, came together to tackle the project.

Agile Collaboration

Daily stand-up meetings and ongoing communication facilitated iterative development and enhancements.

Results and Impact:

The project concluded with the creation of an advanced SOTA solution that effectively addressed Swiftly’s challenges. This solution improved job listing categorization precision and streamlined the application process. The SOTA also offered a proof-of-concept for refining job listing keywords and automating application field population.


Swiftly’s collaboration with our team resulted in the successful resolution of their job portal challenges through the implementation of innovative automation solutions. The web scraping tool and the SOTA presentation highlighted the potential of technology to enhance processes, elevate user experiences, and pave the way for future enhancements.


Key numbers

  • Project Duration: Successfully completed within 16 days!



Our Client

  • Name: MuuMap
  • Line of business: Software for the dairying industry
  • Country: Poland




The client noticed that the dairy industry was heavily reliant on traditional practices, leading to inefficiencies. Recognizing the opportunity for improvement, the client aimed to create a digital system that could streamline milk delivery management and procurement processes.

Our team collaborated closely with the client to create a comprehensive end-to-end product. We decided together on the features and improved the concept collaboratively.


The team and our approach to the project.  

The team comprised a Product Owner, 3 backend developers, 2 frontend developers, a UI/UX designer, and 1 tester. The team size was adjusted as the project required, growing when new functionalities were needed and scaling down during periods of lower business demand.

To accommodate the complex and evolving nature of the system, we opted to work in a Time and Materials model. This approach provided the necessary flexibility and responsiveness to adapt to the project’s changing needs over time.

The beginning of the journey

  • It all started with a navigation tool for the drivers. The client noticed that new drivers faced challenges navigating through the 3000 dairy farms. They needed details about accessing area premises, locating milk tanks, and gate openings. Typically, new drivers spent two months riding along with experienced drivers to learn the routes, and even then, they would call dispatchers or other drivers for directions to specific farms.
  • To address this, we collaborated with the client and developed a solution. We placed location pins on the map for each farm’s milk cooling station and, if needed, “drew” a new road. This helped drivers get precise route plans, accurate directions, and essential information about the yards they visited.
  • The success of this application reduced driver training time from two months to just 2-3 days.
  • Later, the Manager module was created, serving as a massive CRM system. It holds vast amounts of information about farmers, their milk deliveries, drivers, license expiration dates, available fleet, subcontractors, destination points, and daily production demand. This comprehensive tool provides an all-encompassing overview of the entire dairy operation.
Review Quote
We’ve been collaborating for many years, and truth be told, we co-created MuuMap together. The success we’ve achieved is undoubtedly a result of this partnership. They are a trusted partner and our number one choice.
Adam Strużyński
Product Manager of MuuMap

MuuBox – an answer to how to optimize a delivery process


As we continued to enhance the system, the client recognized the need to optimize the milk delivery process further. To achieve this, our team introduced automatic route planning algorithms, which revolutionized how routes were planned for the drivers. Instead of relying on manual decision-making, the system could generate the most efficient routes based on various factors like delivery locations, vehicle capacity, and traffic conditions. This saved time, reduced fuel consumption, and improved overall operational efficiency.
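The production planner weighs delivery locations, vehicle capacity, and traffic; a minimal nearest-neighbour sketch conveys the core idea of ordering farm stops by proximity. The coordinates below are fabricated, and the greedy rule is only a toy stand-in for the real algorithms:

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy route: always drive to the closest unvisited farm.

    A toy stand-in for MuuMap's real planner, which also considers
    vehicle capacity and traffic conditions.
    """
    route, current = [], depot
    remaining = dict(stops)
    while remaining:
        name = min(remaining,
                   key=lambda n: math.dist(current, remaining[n]))
        route.append(name)
        current = remaining.pop(name)
    return route

depot = (0.0, 0.0)
farms = {"farm-a": (5.0, 0.0), "farm-b": (1.0, 1.0), "farm-c": (2.0, 2.0)}
```

Even this greedy heuristic beats manual route-picking for a dispatcher; the real system layers capacity and traffic constraints on top of the distance ordering.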


Additionally, we sought to digitize and simplify the process of documenting milk quantities at each farm. To accomplish this, we developed electronic devices called MuuBox. These devices were installed in milk tankers and are responsible for uploading data in real time to the MuuMap system.


Previously, the milk collection process involved a lot of manual work for the drivers. They either had to use handwritten protocols to record the quantity of milk collected at each farm or print bulk receipts from their route, which had to be manually entered into the computer. This manual data entry was a time-consuming and labor-intensive task. For instance, entering 3000 positions manually required significant effort.

However, with our solution, this manual process was digitized and streamlined. The collected pick-up data is now automatically aggregated in our system and can be integrated with other systems. This automation significantly reduced the need for manual data entry, resulting in fewer errors than the previous approach.


By automating data entry, MuuBox significantly reduced the administrative burden on both the drivers and the milk collection department. Instead of introducing mistakes through manual typing, the department can now focus on catching and correcting them.


We are aware of different customer needs

For those MuuMap clients who preferred not to use the MuuBox devices, we introduced a manual entry functionality directly on their tablets. With this option, they could input the milk quantities manually. The advantage of this approach was that it eliminated the need for paperwork while ensuring accurate data recording. Just like with the MuuBox integration, the collected data from the manual entries were uploaded in real-time to the MuuMap system, streamlining the process and ensuring seamless data management.

Review Quote
The product quality is phenomenal, and all of my expectations have been met.
Adam Strużyński
Product Manager of MuuMap

From manual to autonomous milk reception

  • The latest module created is for milk reception at the plant. During inspections, officials present a purchased product with a QR code, requesting documentation for that product. That’s when a manual paperwork process begins.
  • The quality control department employee needs to search for the delivery from that day and then look for the specific delivery to that particular tank. Only then can they find the precise time the milk from different routes was collected in that tank. This process can be time-consuming and prone to errors due to the manual nature of the documentation.
  • Now, with the newest module, MuuMap continues to be involved in the process beyond milk delivery to the gate. The system supports weighing the truck upon entry, documenting laboratory tests, and recording the destination tank where the milk is pumped or stored.
  • After the truck leaves the plant, it is weighed again, providing valuable information on the actual milk quantity received compared to the declared amount. Based on this data, MuuMap’s application generates a digitized route report, allowing easy traceability of the milk’s journey, including the specific day, routes, and suppliers contributing to each tank.

The results of the dairy revolution

Thanks to the application, the client became a pioneer in the market by offering a tool specifically designed for the traditional dairy industry. The unique and efficient solution attracted the first customers, who loved it and spread positive reviews. As a result, the client’s reputation grew, and more and more people started using the application. Eventually, it captured a significant portion of the Polish dairy market, securing a dominant position with a 30% market share.


27600 Farmers

677 Road Tankers

30 Dairy Plants

1175 Drivers

650 Devices

Over 5 billion liters

34.30% in Poland
3.48% in Europe


How we helped patients communicate with their caregivers in an emergency

Who did we work with?

Our client is a representative of the Nursing and residential care industry, a part of the MedTech sector. 

They work on different solutions that may help the caregivers keep an eye on their patients, in case an emergency happens outside their working hours.

Their most important goal is to ensure that their patients can communicate with them effortlessly in any situation requiring immediate assistance.

What was the challenge?

After a long prototyping phase, the company had a working product. However, by the time the phase ended, some of the components had gone out of date and required replacement or a rework of the infrastructure. The outbreak of the Covid-19 pandemic made some parts hard to access, especially semiconductors, causing turmoil in the market.

With limited documentation and resources, our team needed to work out a solution that would compromise on some resources and accommodate the end users’ specific needs.

How did we want to tackle it?

With limited and outdated documentation, our team had to work out a viable solution that would fit the budget.

After detangling the code and failed architecture elements from the working devices, the experts proposed redesigning the product to work with the new components that needed replacement.

It required a combination of software and hardware to create a working hub to integrate the devices into the system.

Technologies that helped us

  • C
  • C#
  • C++
  • Microservices
  • Bluetooth Low Energy (BLE)
  • Microsoft .NET
  • Microsoft Azure
  • Kubernetes Integration

What was our client’s goal and the challenge they faced before reaching us?


Our partner, a MedTech company specialising in implementing innovative solutions for nursing homes and in-home patient care, needed an easy-to-use device to allow patients to communicate with their caregivers in an emergency. In some countries, caregivers work only on business weekdays, so patients living alone in their own homes needed an infallible means of communication in case of urgent need.


The device needed to be very easy to use, as senior patients often aren’t tech-savvy enough to work with more complex technology. It had to accommodate the key functions and enclose them in several buttons and clear displays.


The product went through a long, six-year prototyping phase, and by its end some of the components were outdated and required replacement or a different infrastructure to work properly. They needed hardware and software experts to assemble the elements into a working IoT network hub.


Our client needed expertise and a competent team in both hardware and software to help implement the required functions and the device connectivity to the system. They met our experts, and the project started in 2021.

How did we kick off the work?

After meeting the client and familiarising ourselves with the product, our team had many challenges to face before they could outline an optimal solution.

Due to the long prototyping phase and the need to change some key components, our team deemed the existing architecture and software insufficient. They also had to deal with limited documentation regarding the devices and their infrastructure.

After navigating the available resources and consulting the elements with the lead developer and the CEO to lay out the problem, the team pointed out that the infrastructure needed a rework due to the outdated components and parts of the code.

Who were the experts behind the project?

Our team comprised a project coordinator, delivery manager, tech lead, BLE expert and an electronic engineer to ensure software and hardware implementation. They were supported by the company’s lead developer and CEO, who actively participated in the development and helped fill in the gaps whenever necessary.



What did they do to solve the issue?

To deliver a device that could be part of a bigger IoT network, the team had to manage and organise their work based on the available documentation and the expertise of the company’s lead developer.

After untangling the issues with new components, existing architecture and code, they proposed redesigning the architecture and rewriting the code, as it was easier than trying to work out how to incorporate the existing code and infrastructure into new components.

They also implemented the necessary features into an easy “press-and-play” device that would pose no problem for end users. The BLE technology proved useful in helping manage battery life optimisation.


What were the key steps of the project?


  • Assessing the state of the current software and architecture and how it could work with the device
  • Redesigning the product to work with new components
  • Designing new firmware to work with the existing devices
  • Analysing the code to make it compatible with new components
  • Extending the battery working time from the initial 12h on a single battery
  • Implementing energy management optimisation
  • Ensuring that the software and hardware comply with the EU MedTech norms
  • New optimisation and code base
  • BLE implementation
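Energy-management reasoning of this kind usually starts from the average current draw under a duty cycle, which is where BLE's very low sleep current pays off. A back-of-the-envelope sketch with entirely hypothetical figures (not the device's real specifications):

```python
def battery_life_hours(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Estimated runtime given the fraction of time the radio is active.

    All parameters are hypothetical; the point is that BLE's tiny sleep
    current makes aggressive duty-cycling dominate the battery budget.
    """
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma

# Hypothetical device: 1000 mAh cell, 15 mA when active, 0.05 mA asleep.
always_on = battery_life_hours(1000, 15, 0.05, 1.0)     # radio never sleeps
duty_cycled = battery_life_hours(1000, 15, 0.05, 0.01)  # active 1% of the time
```

The same arithmetic explains why shortening active windows, rather than swapping batteries, is the usual first lever when extending runtime.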


What did we achieve?

During different project stages, we achieved several goals:
  • Making a solid foundation for the end product – an IoT-based communication hub
  • Simple control with only a few buttons allows for easy use, removing a technological barrier for its users
  • Integrating wireless connectivity technologies, including dual SIM, WiFi, and Bluetooth
  • Making changes to the product architecture, replacing the missing components with new parts that were more available on the market
  • Ensuring the hub can connect with up to 64 devices (wireless sensors)
  • Designing new architecture for seamless connectivity
  • Increasing the battery life to a reliable 48h and providing continuous support for the product

The cooperation ended in November 2022, and the client was satisfied with the results. We created a solid foundation for the product that is currently available on the market. Our continuous efforts to improve and enhance the product helped the company to reduce the market release time. Throughout the collaboration, the product has been continuously supported by our developers.

Review Quote
I can confirm that we have been highly satisfied with the cooperation. We’ve witnessed the team display the required skillset. Together we’ve managed to build an extended R&D team working in a task-force mode, which seems very efficient.
CEO of the Company



Enelion is a manufacturer of electric car chargers and ecosystem management software for electromobility. The company has been designing electronics and manufacturing equipment in Poland since 2016 and has delivered several thousand chargers to customers at home and abroad. Enelion is also developing charger network management software for operators and charging service providers. In addition to foreign customers, Enelion cooperates in the Polish market with PGE, Tauron, Energa, Polenergia, and Greenway.
Experience we shared
Efficient Systems Data management
Software integration

Customers’ business goals

The simple provision of chargers to tenants and billing of energy consumed in the administration system.

Optimal use of available power in the building.

Protection against network overload in an office building or parking lot.


User search optimization

Here we used PostgreSQL’s ltree extension to represent labels of data stored in a hierarchical, tree-like structure.
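PostgreSQL's ltree stores labels such as `operator.region.site` and answers ancestor/descendant queries (e.g. the `<@` operator) directly in SQL. The matching semantics can be sketched in a few lines of Python; the label paths below are invented for illustration:

```python
def is_descendant(path, ancestor):
    """Mimic ltree's `path <@ ancestor` check: True when `path` equals
    `ancestor` or sits below it in the dotted label hierarchy."""
    p, a = path.split("."), ancestor.split(".")
    return p[:len(a)] == a

# Hypothetical user labels in an operator hierarchy:
users = ["acme.north.site1", "acme.north.site2", "acme.south.site3"]
northern = [u for u in users if is_descendant(u, "acme.north")]
```

In the database this prefix test runs against a GiST index, which is what makes hierarchical user searches fast at scale.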

Closest stations search optimization

We employed PostGIS, a spatial database extender for the PostgreSQL object-relational database. It supports geographic objects, allowing location queries to be run in SQL.
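PostGIS answers such queries in SQL by ordering stations by spatial distance; the underlying great-circle computation looks like this in Python (the charger coordinates are made up for the example):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def closest_station(user, stations):
    """Return the station name nearest to the user's position."""
    return min(stations, key=lambda n: haversine_km(*user, *stations[n]))

# Hypothetical charger locations:
stations = {"gdansk": (54.35, 18.65), "warsaw": (52.23, 21.01)}
nearest = closest_station((54.40, 18.60), stations)
```

Delegating this to PostGIS instead of application code lets the database use a spatial index rather than scanning every station.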

Communication between applications and queuing

We employed RabbitMQ, an open-source message broker that can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.
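RabbitMQ itself requires a running broker process; the producer/consumer pattern it implements between applications can be illustrated with Python's stdlib queue. The message contents below are invented:

```python
from queue import Queue

broker = Queue()  # in-process stand-in for a RabbitMQ queue

def publish(message):
    """Producer side: the charger backend enqueues a status event."""
    broker.put(message)

def consume_all():
    """Consumer side: an app drains pending events in FIFO order."""
    events = []
    while not broker.empty():
        events.append(broker.get())
    return events

publish({"charger": "st-01", "status": "charging"})
publish({"charger": "st-02", "status": "idle"})
received = consume_all()
```

Decoupling the apps through a queue is what lets the backend keep publishing charger events even when a consumer is temporarily down.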

Hardware management integration between apps

The goal was to provide remote access to the charger’s status and to allow both users and end users to control the charger, e.g., switch it on or off remotely. The two groups use different apps to perform these activities. Our team accessed the chargers’ software backend and conducted the integration from that level.

Process

Our software development team worked together with the client’s team. The work was aligned with the Scrum methodology.

Delivered value:

  • Search for charging stations,
  • Charger booking,
  • Payment monitoring,
  • Integration of the platform with end users’ mobile app,
  • Remote control over the charging stations (start/end, status),
  • User division (Operators & Charging Service Providers) and access-level control.

The system allows dividing the network into smaller operators, who only have access to their own devices. Charging Service Providers can check the status of a charging station. Thanks to the connection with the Enelink system, most maintenance activities can be performed remotely. Another feature is setting up a charging plan that limits a station’s power at the right time if the Service Provider serves several Operators. All charging stations can be labeled, which makes it easier for Service Providers to manage the stations under a given label. Information about the stations’ availability can then be shared with end users in a few clicks.

The Dynamic Tariff solution provides an attractive offer for each end user, encourages them to charge in specific places, and introduces discount coupons and VIP programs. Setting tariffs helps Service Providers optimize earnings at charging stations.

Used Technologies:

RabbitMQ
Flask Framework



The biggest dairy company in Poland, employing over 5,000 people, with 19 manufacturing plants and products sold in 159 countries.
Experience we shared.
Data Analytics

Problem & Solution


The Company was looking for improvements in the following areas:

  • Managing transport resources
  • Monitoring and transport control
  • The navigation system

MuuMap provides support for managing vehicle fleets and transport personnel. The system allows its users to edit maps, supplement them with new roads, mark temporarily unavailable routes, and enter data about suppliers’ locations and their milk volume. The fleet management module allows the identification of vehicle capacity, the number of compartments, and the option of attaching trailers. Next, the information is used for optimizing and planning milk collection. MuuMap also offers a vehicle registry and facilitates periodic settlement of fleet maintenance costs.

Delivered value.

The company reported savings of up to 20% in kilometers driven and 30% in total cost, and a 50% reduction in planning time. The system makes it possible to rotate drivers on routes, and induction training time has been lowered by up to 80%. The platform gives employees fast access to all real-time transport information: fleet time, supplier and recipient characteristics and locations, the current stage of route completion, and the location of the fleet and drivers.


Review Quote
The cooperation with DAC to this date has been very satisfactory and ensures that the MuuMap system (Transportation Management System) is updated on an ongoing basis; we obtain advisory support from specialists of DAC, maintenance services as well as development and adjustment of the system to our needs that continue to expand.
Dariusz Sapiński
President of the Management Board of Mlekovita

Used Technologies.

React JS
AWS CodeDeploy


Our Client

  • Name: Eldro – The company provides maintenance and installation services in the field of construction and modernization of power installations, telecommunications systems and fiber-optic networks. Eldro deals in the design, implementation and integration of security systems and the implementation of projects in the field of automation.
  • Country: Poland




The company was looking for solutions to the following problems:

  • the need to service many geographically dispersed traffic light devices,
  • high maintenance readiness costs,
  • the need to meet demanding SLA conditions,
  • high implementation costs of new service contracts,
  • technologically diverse lighting control devices, requiring support for various protocols,
  • the obligation to give the infrastructure owner a preview of the service process.


We provided Eldro with a wide range of solutions clustered into a comprehensive and easy-to-operate platform, including:

  • solution architecture ensuring stable system operation with a massive number of devices,
  • a dedicated service request management system limiting the time and costs associated with handling requests,
  • auto-classification of service requests by connecting various types of traffic light controllers,
  • a useful interface to review the implementation process,
  • a mobile application for service teams to accelerate and optimize the processing of requests.

Experience we shared

Enterprise Integration

IoT Integration Platform


Cooperation with Eldro involved creating the system from scratch, from the concept to the implementation of full functionality. In addition to the production team, we appointed a Product Owner, who oversaw the process of creating requirements and specifications. The project involved creating an interface design, so a UX Specialist was also involved. Due to the specificity of the market, the project did not end with a single product but is still subject to development and changes.

Delivered value

The company shared the following improvements observed after the implementation of our solutions:

The possibility of entering new tenders in market areas not previously available.

Reduction of the costs of handling the contract.

Reduction of costs related to contractual penalties for failure to comply with SLA conditions.


Review (via Clutch)
“Their efforts significantly reduced maintenance costs and potential penalties. Their team worked smoothly, mapping out a clear scope and building out a solid platform. Their knowledge of technology and development skills were highly impressive.”

Used Technologies.

WebFlux (Reactive Spring)

Estimate your project.

Just leave your email address and we’ll be in touch soon

Our Client

  • Name: Mlekovita – the biggest dairy company in Poland, employing over 5,000 people, operating 19 manufacturing plants, and selling its products in 159 countries.
  • Country: Poland




The Company has been dealing with the following issues:

  • confirmation of the number of litres received from the Manufacturer was entered manually on the PZ printout,
  • after the route was completed, drivers delivered the printouts to the Purchasing Department, which had to manually enter over 2,000 collections per day into the billing system,
  • all errors arising in the process of typing and rewriting were verified manually at the end of the month,
  • failure to detect the error resulted in a complaint from the Manufacturer and lack of confidence in the Plant,
  • the employee reporting process was not fraud-resistant.


  • The IoT devices were installed and connected to the RS232 port of the computers supporting the pump
  • The quantity and temperature of the milk received are sent directly to the cloud after each collection
  • The values are published immediately in the Transportation Management System, available in real time to the Customer
  • The values are also sent to the billing system

Experience we shared


Data Analytics


The cooperation started with a detailed audit of the equipment installed in the tankers that carry out the milk collection process. Based on the audit:

  • due to the diversity of the trucks’ equipment, a set of different firmware versions was prepared, supporting the various data formats sent via the RS232 port,
  • parsers were prepared for transmitting information from the devices to the Transportation Management System and to an external billing system,
  • algorithms recognizing adverse events (driver errors, abuse) were introduced.
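As a rough illustration of what such a parser does, the sketch below decodes a single hypothetical RS232 frame. The "litres;temperature;timestamp" layout is an illustrative assumption – the deployed firmware supported several vendor-specific formats.

```python
# Hypothetical parser for a single milk-meter frame received over RS232.
# The frame layout here is an illustrative assumption, not the actual
# format used by any specific truck's equipment.

def parse_frame(frame: str) -> dict:
    """Decode one semicolon-separated measurement frame."""
    litres, temp, ts = frame.strip().split(";")
    record = {
        "litres": float(litres),        # quantity of milk received
        "temperature_c": float(temp),   # milk temperature at collection
        "collected_at": ts,             # timestamp reported by the meter
    }
    # a basic plausibility check, in the spirit of the abuse-detection
    # algorithms mentioned above
    if record["litres"] < 0 or not (0.0 <= record["temperature_c"] <= 15.0):
        raise ValueError(f"implausible reading: {frame!r}")
    return record

print(parse_frame("1250.5;4.2;2021-06-01T06:30:00"))
```

A separate parser of this shape per data format is one straightforward way to normalize diverse equipment output before it reaches the Transportation Management System.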

Delivered value

The company shared the following improvements after the implementation of our solutions:

The process is fully automatic and does not require the participation of dairy workers; manually completed printouts have been eliminated.

One Purchasing Department employee, previously needed to enter receipts manually, was reassigned to other tasks.

Drivers’ work has been improved.

Fraud has been limited.

Reporting time has been limited.


Review
The benefits of using the equipment in our tank trucks convince us about the quality of the product, as well as the entire cooperation with the DAC team to date. Thanks to them, it is possible to receive information about purchased milk in real-time, which facilitates both planning the collection and controlling the entire process. We perceive DAC as a strategic partner in the area of optimization of the milk purchase and delivery process. The current cooperation with the company is satisfactory and guarantees us software updates on devices, constant support of specialists and service care.
Szczepan Kostro
Head of the Transport Department at Mlekovita

Used Technologies.

Raspberry PI Zero W
Python 2.7
Python 3.7
Flask Framework
Continuous Integration
Continuous Deployment


System Prototype Demonstrated in Operational Environment

SIN-On is an onboarding toolchain for IoT Nodes (based on STM32WB55 microcontrollers) designed to be seamless and user-friendly (the user requires NO technical knowledge). It makes the deployment of sensor networks quick and easy.

The toolchain has a management layer in the cloud that is used to manage the sensors’ configuration (e.g. sampling frequency). Hence, with a few simple operations, the user can decide which sensor should return its measurements and at what frequency. This is very time-efficient because the user neither has to “manually” add this information to the configuration files of each sensor nor re-program the microcontroller to get the actual data.

State of the art

It is estimated that by 2026, over 64 billion devices will be part of the IoT ecosystem. As the number of IoT devices grows, integrating new devices into the IoT network poses several challenges. For instance, Derhamy et al. evaluated the existing IoT frameworks against criteria such as security, protocols, rapid application development support, hardware requirements, architectural approach, interoperability, and industry support. They found a need for a more advanced framework and tool that would help businesses quickly and seamlessly integrate new IoT nodes/devices into their existing system or IoT infrastructure. Although Derhamy et al. deduced this in 2015, not much progress had been made in this domain since, as confirmed by Paniagua and Delsing in 2021.

The solutions that existed prior to the development of SIN-On were each limited by one barrier or another. Some were limited by the number of devices they could onboard, others by the type of devices, or by dependencies such as manufacturing or distribution. For SIN-On, the theoretical limit is known and is in the range of a few hundred sensors per gateway, though the practical limit is somewhat lower. Existing solutions also required a lot of manual effort from the user, or a specialist had to be involved each time a new node was onboarded to the existing system. Compatibility issues were also common in most industries. All this amounted to higher costs and longer lag times, which was not beneficial for businesses.

Hence the main motivations for the development of SIN-On were as follows:

  • Reduction of the engineering costs related to the deployment of wireless sensor networks
  • Reduced involvement of specialists/service personnel
  • Automated handling of device credentials throughout the toolchain
  • Ability to remotely manage sensors and gateways
  • Smooth and continuous diagnostics
  • Compatibility/integration with a larger ecosystem, achieved through the Arrowhead Framework

The Solution: How does it work?

The credentials of a node (stored in the secure memory of the Hardware Security Module) are scanned with a phone’s NFC, which takes the user to the onboarding web-app. After logging in, the onboarding app asks for the name of the node, the gateway to which it should be onboarded, and one of the preset configurations. The data are then sent to the Cloud Management Interface, which passes the configuration to the particular gateway. The gateway starts to look for unpaired nodes; once the node is powered up and in range of the gateway, they pair, the gateway sends the configuration to the node (through the GAP GATT REST API), and the node starts providing data. The data are passed to InfluxDB and visualized in real time.

This onboarding toolchain makes deploying new sensor networks easily manageable, configurable, and secure. It is compatible with your own embedded or cloud solution, where the measurements can be processed on board or via a distributed application.
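The onboarding steps above boil down to a single payload assembled by the web-app and handed to the Cloud Management Interface. The field names and preset structure below are illustrative assumptions, not the actual SIN-On API.

```python
import json

def build_onboarding_request(node_uuid, node_name, gateway_id, preset):
    """Assemble the onboarding payload: the node's credentials come from
    the NFC scan, everything else is chosen by the user in the web-app.
    This is a sketch; real SIN-On field names may differ."""
    return {
        "uuid": node_uuid,        # read from the node's secure memory (HSM)
        "name": node_name,        # human-readable name chosen by the user
        "gateway": gateway_id,    # gateway the node should be onboarded to
        "configuration": preset,  # one of the preset configurations
    }

request = build_onboarding_request(
    node_uuid="6ba7b810-9dad-11d1-80b4-00c04fd430c8",  # example UUID
    node_name="barn-temperature-01",
    gateway_id="gateway-07",
    preset={"sampling_frequency_hz": 1, "sensors": ["temperature"]},
)
print(json.dumps(request, indent=2))
```

Because the configuration travels with the request, the gateway can push it to the node at pairing time, which is what removes the need to re-program the microcontroller.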

IoT Node

  • Expose UUID for the onboarding process.
  • Look for field gateways during the operation.
  • Fetch the configuration.
  • Read data from the attached sensors (through one of the physical interfaces).
  • Provide the certificate to prove its identity.
  • Assure two-way communication with the field gateway (to assure that the data have been received).

Field Gateway

  • Aggregate nodes within a single IoT cloud.
  • Provide configuration for the nodes.
  • Exchange data with the management infrastructure (through the Internet, GPRS, NB-IoT).
  • Authenticate new nodes.
  • Monitor the status of the nodes, and access token expiration dates.
  • Provide its identity on demand.

Mgmt Infrastructure

  • Update the list of admitted nodes for particular field gateways.
  • Manage multiple IoT clouds.
  • Dynamically configure nodes.
  • Manage firmware updates.
  • Visualize the data.
  • Serve as a connection to data storage.

Case Studies.


System Prototype Demonstrated in Operational Environment.

GAP GATT REST API is a tool developed as part of the onboarding toolchain created in use case 11 (UC11) of Arrowhead Tools: a configuration tool for autonomous provisioning of local clouds. The motivation behind its development was to incorporate Bluetooth Low Energy (BLE)-based data provisioning into the Service-oriented Architecture (SOA) ecosystem.

The tool is mainly designed to work on Linux-based embedded devices, although it should also work on desktop computers with Linux-based systems and Bluetooth chips. It consists of two parts – the Generic Access Profile (GAP), for the management of BLE connections and their parameters, and the Generic Attribute Profile (GATT), for exchanging data – described in terms of their functionalities below.

Generic Access Profile: GAP REST API provides endpoints for, among others, passive and active scanning of nodes and connecting to a node. GAP is stateless and supports a gateway operating in the central and observer roles. The Hypertext Transfer Protocol (HTTP) method GET is translated to Bluetooth’s READ, while PUT is translated to WRITE. In addition to PUT and GET, the EventSource method is used to handle a stream of notifications and indications.
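The method mapping described above can be sketched as a small dispatch table. The helper below is illustrative, not part of the actual GAP/GATT REST API implementation.

```python
# Illustrative mapping of the HTTP methods used by the REST proxy onto
# BLE operations, as described above.

HTTP_TO_BLE = {
    "GET": "READ",                     # HTTP GET  -> Bluetooth READ
    "PUT": "WRITE",                    # HTTP PUT  -> Bluetooth WRITE
    "EVENTSOURCE": "NOTIFY/INDICATE",  # stream of notifications/indications
}

def translate_method(http_method: str) -> str:
    """Map an incoming HTTP method to the Bluetooth operation it triggers."""
    operation = HTTP_TO_BLE.get(http_method.upper())
    if operation is None:
        raise ValueError(f"unsupported HTTP method: {http_method}")
    return operation

print(translate_method("GET"))  # prints: READ
```

Keeping this mapping explicit is what lets the proxy stay stateless: each request is translated independently, with no per-connection session state on the REST side.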

Generic Attribute Profile: GATT supports, among others, the following functionalities:

  • service discovery (including primary services and filtering by Universally Unique Identifier (UUID)),
  • characteristics discovery (by UUID as well),
  • subscribing to notifications and indications,
  • reading and writing characteristics,
  • handling notifications and indications on the server side,
  • reading/writing descriptors of characteristics.

State of the art.

New developments and recent trends in wireless sensor networking technologies have sparked the creation of inexpensive, low-power, multipurpose sensor nodes. Data processing and environment sensing are made possible by sensor nodes. Different surroundings may be monitored with the use of sensors that can detect volatile compounds, temperature, and humidity. They can communicate via networks with other sensor devices and share data with outside users.

The evolution of the Web as we know it has been influenced by the Representational State Transfer (REST) architectural style, which outlines a set of guidelines for the design of networked hypermedia systems. RESTful Web services are web services that adhere to the REST architectural style, and REST APIs are the programmatic interfaces for these services. The architectural decisions made by the Web to support the scalability and stability of networked, resource-oriented systems based on HTTP have greatly influenced the design concepts for REST APIs. The core principles are: resource addressability, resource representations, uniform interface, statelessness, and hypermedia as the engine of application state.

The use of GATT REST API and GAP REST API for Bluetooth Low Energy devices was introduced back in 2013. Since then, they have appeared in several applications. GAP defines the general topology of the BLE network stack. GATT describes in detail how attributes (data) are transferred once devices have a dedicated connection.

In this context, one of the main motivations for this solution was that legacy data exchange technologies are not supported by modern SOA/microservices architectures. Moreover, the deployment of complex sensor networks – their management, diagnosis, decommissioning, and evolution – required specialised knowledge and tools. To overcome these limitations, an onboarding tool was developed, of which GAP GATT REST API is a part.

The Solution: How does it work?

The tool is implemented according to Bluetooth Special Interest Group (Bluetooth SIG) specifications (GAP REST API V10r01 and GATT REST API V10r01) and is a natural proxy/translator between BLE communication interfaces and REST API (used by Eclipse Arrowhead). 

  • After the gateway receives a list of nodes from the backend and detects an unpaired sensor, it sends a request to GGRA to connect to this node, along with the payload (the desired configuration)
  • Once they are connected, the Bluetooth characteristics are registered as REST services in the Service Registry
  • The Authorization and Orchestration rules are dynamically set for the data provisioning service, and it is orchestrated to the gateway’s main service
  • The data are passed to the backend for further processing, or are available on the gateway
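A Service Registry entry for a single characteristic might look roughly like the record below. The field names follow the general shape of Arrowhead registration requests but are illustrative here, not the exact GGRA payload.

```python
# Rough sketch of the record that could be registered in the Arrowhead
# Service Registry for one BLE characteristic. Field names are
# illustrative assumptions.

def characteristic_to_service(char_uuid: str, gateway_host: str,
                              gateway_port: int) -> dict:
    """Describe one GATT characteristic as a consumable REST service."""
    return {
        "serviceDefinition": f"ble-characteristic-{char_uuid}",
        "providerSystem": {
            "systemName": "field-gateway",
            "address": gateway_host,
            "port": gateway_port,
        },
        "serviceUri": f"/gatt/characteristics/{char_uuid}/value",
        "interfaces": ["HTTP-SECURE-JSON"],
    }

entry = characteristic_to_service(
    "00002a6e-0000-1000-8000-00805f9b34fb",  # Bluetooth SIG Temperature UUID
    "192.168.1.20", 8443)
```

Registering each characteristic as its own service is what makes BLE data consumable by any Arrowhead participant, subject to the dynamically created authorization rules.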


Arrowhead Compatibility.

To make the tool compatible with Arrowhead, a specially designed Attribute Table is used to differentiate measurements (with value characteristics) and metadata about the device. Both measurement service and metadata service are registered in Arrowhead Service Registry as soon as they are connected to the gateway (in UC11 there’s a tool responsible for automated connection with the BLE nodes). At the same time, the authorization and orchestration rules are created, and the gateway that requested the connection can consume the incoming data from the provider.



The measurements are obtained using the onboard computer (the list of measurable variables is extracted from the setup description) and sent to the Ethereum blockchain (to oraclize the data). Oraclize is a service that allows smart contracts to access data from other blockchains and the Internet. The measured data and the hash of the blockchain transaction are communicated to the backend using IDSCP (Industrial Data Spaces Communication Protocol) via the IDS connector, where they are saved in MongoDB and visualised on the front end in real time.

Streamlining Dairy Industry with MuuMap: A Digital Revolution

The Solution: How does it work?

The Data Generation Application (DGA) replicates the operation of a milk truck measuring system. The data is created in JSON format (for the time being) and is provided over IDS to an external consumer, which saves it in the database.

DGA also transmits data to the Distributed Ledger Technology (DLT) connection, which stores it on a Distributed Ledger and provides the transaction hash.
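A DGA record might look roughly like the JSON below. The field names are illustrative assumptions, not the actual MuuMap schema; the transaction hash slot is filled in once the DLT connector returns it.

```python
import json

# Hypothetical example of one measurement record produced by the DGA.
record = {
    "vehicle_id": "truck-12",
    "collected_at": "2022-03-15T06:42:00Z",
    "milk_litres": 1250.5,
    "milk_temperature_c": 4.2,
    "gps": {"lat": 52.2297, "lon": 21.0122},
    # hash returned by the DLT connector after the ledger transaction
    "dlt_transaction_hash": None,
}

payload = json.dumps(record)     # what travels over IDS to the consumer
restored = json.loads(payload)   # what the consumer stores in the database
```

Keeping the ledger hash inside the same record lets a consumer later verify the measurement against the Distributed Ledger without a separate lookup key.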


Simplified Communication.

The Arrowhead Framework is designed to act as a service discovery interface between the OBC application and the Digital Reference Platform. The sequence may look like this:

  • DRP adds an endpoint to the AHF Service Registry that listens for requests.
  • DGA registers in the Service Registry as a consumer and searches for endpoints to which it may connect (in particular, the DRP endpoint).
  • DGA sends queries (one of which is preconfigured) on a regular basis to get the configuration of the nodes based on the kind of vehicle.
  • DRP responds with the requested configuration’s Schema.

DRP also includes an application for visualising data in the following possible configurations:

  • Full configuration
  • No GPS configuration
  • No flow sensor configuration
  • No temperature sensor configuration
  • Only GPS configuration



System Prototype Demonstrated in Operational Environment.

N-SAAW is a one-of-a-kind Deep Neural Network (DNN)-based system for monitoring farm animal health and wellbeing. This method was created specifically to examine the milk protein to fat ratio, encompassing extremes associated with malnutrition. This type of detection aids in lowering the danger of ketosis or acidosis induced by starvation. N-SAAW has already been deployed and evaluated in real-world agricultural settings, where it has been shown to identify malnutrition about 3.5 times sooner than current analytical approaches. Furthermore, because of the pre-processing, there was no possibility of the DNN missing starvation.

N-SAAW may be effortlessly linked with farm sensors and provides a way for autonomous monitoring to collect data that is then analyzed to provide insights to farmers. It also includes a visualization and reporting tool, allowing the provenance to be disclosed further down the supply chain.

The Solution: How does it work?

The solution developed and demonstrated was a tool to support the monitoring and analysis of milk cows’ bio parameters (temperature, pH). The data on temperature and pH are gathered from a ruminal probe.

The tool’s core functionality is based on a Recurrent Neural Network trained to predict the possible health deterioration of a specimen. In case of a predicted threat, an alert is triggered to inform the farm manager about the malnutrition of an animal. The triggering value has been set to pH = 5.8, sustained over a period of a few hours. Figure 1 presents the tool’s user interface, showing the measurements and predictions tables, alerts, and pH values over time.
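The triggering logic can be sketched as a small check over recent probe readings. The three-hour window below is an illustrative stand-in for the "few hours" mentioned above; the real system applies the threshold to the probe stream rather than to an in-memory list.

```python
from datetime import datetime, timedelta

PH_THRESHOLD = 5.8
SUSTAIN_WINDOW = timedelta(hours=3)  # illustrative value for "a few hours"

def should_alert(samples):
    """samples: time-ordered list of (timestamp, pH) from the ruminal probe.
    Alert when pH has stayed at or below the threshold for the whole
    sustain window ending at the latest sample."""
    if not samples:
        return False
    window_start = samples[-1][0] - SUSTAIN_WINDOW
    if samples[0][0] > window_start:
        return False  # not enough history collected yet
    return all(ph <= PH_THRESHOLD
               for t, ph in samples if t >= window_start)

t0 = datetime(2022, 5, 1, 6, 0)
readings = [(t0 + timedelta(hours=h), 5.6) for h in range(5)]
print(should_alert(readings))  # True: pH below 5.8 for over three hours
```

Requiring the whole window to sit at or below the threshold filters out brief pH dips, so the alert fires only on sustained acidosis risk.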


Figure 1: The user interface of an analytical application for pH monitoring and prediction of the bio parameters related to the functioning of the digestive system

DAC conducted an initial validation of the health deterioration (malnutrition) prediction model. The results of the validation are considered satisfactory. As shown in the confusion matrix below (Figure 2), the accuracy of “bad” and “good” predictions is reliable, producing an insignificant number of false positives. The mismatch between “warning” and “good” can be reduced with an auxiliary analytical algorithm.

Figure 2: Confusion matrix of the Recurrent Neural Network (RNN) trained to predict possible health deterioration resulting from malnutrition.


  • health control based on pH and temperature from a ruminal sensor
  • fat-to-protein ratio control, based on milk parameters analysis, in order to detect improper nutrition



System Prototype Demonstrated in Operational Environment.

The global streaming market is projected to reach USD 1.6 trillion by 2029, and the Stream Processing Engine developed at DAC.Digital is set to be an integral part of this growth. It was built by our engineers in cooperation with the University of West Bohemia and ICPS (Latvia). The Stream Processing Engine is a platform based on Kafka. The applied lambda architecture enables scalable integration of data from multiple sensors and supports both real-time and batched data stream processing.

It has already been tested and deployed for stream processing (incoming data as they are measured) and batched data processing (a lot of data is incoming in batches, e.g., once a day). One of the successful deployments was in the dairy industry, where the incoming data from cows’ collars connected to ruminal probes and the data from milking robots were processed.

A more advanced example of using SPE is the practical implementation of Artificial Intelligence algorithms. Data from a topic can provide sets of data to the training algorithms. It might be implemented as a microservice, and by continuous processing of the incoming data, an improvement of the trained algorithm might be achieved. All of these operate and are coordinated through the appropriate implementation within a data stream.

State of the art.

The evolution of the network and sensor network technologies have enabled easy access to real-world information in real time. However, there is still a wide scope to push the envelope in terms of bridging the gap between the requirements of Industry 4.0 and the existing capabilities of stream processing as well as analytics. 

Since the beginning of the 20th century, data production has increased exponentially. In recent years there has also been a steep increase in streaming data, which has created a need for more efficient management and utilization of it. In the context of the Internet of Things (IoT), the multiplication of data stream sources (connected devices, sensor networks, etc.) has been trending, especially in the cloud.


Researchers have pointed out that although there are several frameworks for stream processing, there is a void in stream processing platforms/engines that businesses can readily utilize to take advantage of real-world, real-time big data. There is heavy dependence on open-source stream processing platforms, with only freely available documentation and source code; using these open-source solutions requires high technical expertise, which is both scarce and expensive.

There is also a need for a solution such as the SPE that can integrate multiple streams. When every electronic device ships with its own software, integrating devices from different companies becomes impossible, and users end up managing many different software apps to display and process data.

The SPE addresses this gap in the state of the art and provides businesses with an easy-to-use emerging-technology tool that can boost their operations and productivity.

The Solution: How does it work?

The SPE provides real-time data analytics based on Lambda Architecture, i.e. a generic, scalable, and fault-tolerant data processing architecture. This architecture is based on an append-only and immutable data source. Thus the serving layer is decoupled from data (events) storage and processing. Figure 1 shows the SPE within the AFarCloud platform Semantic Middleware (High-Level Services layer). 

Figure 1: Stream Processing Engine within the AFarCloud platform Semantic Middleware

The aim is to process the data inbound from third-party artifacts (data sources, software systems, and devices such as sensors) within real-time constraints. Examples of processing functionalities are listed below.

  • Pre-processing actions, e.g. filtering and cleaning the inbound data in order to reject all irrelevant or corrupted data.
  • Data aggregation, i.e. combining multiple data sources in order to prepare combined datasets for further processing (grouping data into topics, for instance).
  • Data analytics, e.g. calculating statistics or specific functions (for example, Product Environmental Footprint).
  • Checking business rules for triggering specific action, i.e. creating an alert or calling a specific function.
  • Publish-subscribe mechanism, i.e. AFarCloud stakeholders can provide data on specific topics so that it can be observed and consumed by other participants.
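The pre-processing and aggregation steps above can be sketched on plain Python iterables; in the real SPE, the equivalent operations run as Kafka streaming applications. The plausibility bounds are illustrative assumptions.

```python
def clean(readings):
    """Pre-processing: reject corrupted or implausible records.
    The [-50, 150] plausibility range is an illustrative assumption."""
    for r in readings:
        value = r.get("value")
        if value is not None and -50.0 <= value <= 150.0:
            yield r

def aggregate_by_topic(readings):
    """Aggregation: group cleaned values by topic for further processing."""
    topics = {}
    for r in clean(readings):
        topics.setdefault(r["topic"], []).append(r["value"])
    return topics

raw = [
    {"topic": "soil-moisture", "value": 31.5},
    {"topic": "soil-moisture", "value": None},     # corrupted record
    {"topic": "air-temperature", "value": 19.2},
    {"topic": "air-temperature", "value": 999.0},  # implausible record
]
print(aggregate_by_topic(raw))
# prints: {'soil-moisture': [31.5], 'air-temperature': [19.2]}
```

Composing cleaning and aggregation as separate stages mirrors the layered design of the SPE: downstream analytics and business rules only ever see pre-validated, topic-grouped data.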

SPE utilizes the Apache Kafka platform to implement a Data Broker, as shown in Figure 1 above. This is the core element that manages the inbound data. Kafka provides tools for managing real-time data pipelines and creating provider, consumer, and streaming applications.

Figure 2 presents SPE within the AFarCloud architecture. It is a part of the Data Management layer. The data is directly provided from the AFarCloud Interfaces layer via the Data Access Manager. Therefore there is no need for any additional middleware component (converter or adaptor) for stakeholders that publish the data within the AFarCloud ecosystem. In order to forward data to the SPE as well, the SPE Data Provider must be implemented.

Figure 2: Stream Processing Engine within the AFarCloud architecture

Key Features of Stream Processing Engine:

  • software component
  • integration “all in one” – all data aggregated and adequately processed in one place 
  • ease of introduction of new analytics
  • on-demand scalability 
  • suited for both real-time and batch data processing (thanks to the lambda architecture)

A comparison of the SPE with other publish/subscribe-based technologies is shown in the table below:

|                               | SPE based on Kafka | REST | MQTT |
|-------------------------------|--------------------|------|------|
| Filtering and processing data | allowed on the stream | everything must be implemented from scratch | no |
| Request servicing             | flexible scalability, big throughput | overload by many requests | small throughput |
| Complexity                    | A complete and standalone application. Uses a TCP binary protocol for communication. | API exposing endpoints. Uses HTTP, which is significantly slower than plain binary TCP. | Lightweight transmission protocol. Optimized for sensor networks and M2M. |
| Persistence and reliability   | Ensures high reliability by adding a persistence layer and holding copies of streams. | Everything must be implemented from scratch. | No embedded persistence. |

Who can make use of this technology?

Individual farmers who want their own tailor-made software with the possibility to further extend

Data analysts working in the agricultural field – for rapid development and tests of data processing algorithms in the same framework

Efficient Systems

SPE can be implemented by an integration/software enterprise as a core used for integrating a few separate agricultural apps into one software or system

Example Applications.

Use Case.

Detection of dangerous events on the farm


Sensors that monitor the farm ecosystem (plantation or animal breeding) produce data that can be used in the SPE to detect dangerous events.


The Stream Processor (see Figure 1) can be used to monitor specific types of events originating from the farm ecosystem in order to check the defined business rules. When a condition is fulfilled, it triggers specific actions, for example appending an alert to a specific topic in the SPE stream. The alert can then be handled by dedicated software, for example a system that informs the end user (farmer) about the dangerous situation, or a system that directs a dedicated vehicle (e.g. a drone) to start a mission.
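A business rule of this kind can be sketched as a predicate plus an alert label; the rule structure and thresholds below are illustrative, not the SPE’s actual API.

```python
def evaluate_rules(event, rules):
    """Return the alerts to append to the alert topic for one stream event.
    Rule structure and thresholds are illustrative assumptions."""
    return [rule["alert"] for rule in rules if rule["predicate"](event)]

rules = [
    {"predicate": lambda e: e["type"] == "temperature" and e["value"] > 40.0,
     "alert": "heat-stress"},
    {"predicate": lambda e: e["type"] == "gate" and e["value"] == "open",
     "alert": "gate-open"},
]

event = {"type": "temperature", "value": 42.3}
print(evaluate_rules(event, rules))  # prints: ['heat-stress']
```

Because the rule set is just data, new business rules can be added to the stream processor without touching the code that consumes events, which matches the "ease of introduction of new analytics" feature listed earlier.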

Use Case.

Monitoring of cow breeding zone


Nowadays, a modern farm is equipped with sensors that monitor the cow farming ecosystem. Data from these devices should be fused for further processing by applications and algorithms that provide all the necessary information about breeding. This approach enables detecting abnormalities, planning future breeding, calculating the costs of infrastructure and herd maintenance, and more.


Farm Management System layer (e.g. Decision Support System) or third-party software can consume data from the SPE in order to use it in the breeding domain algorithms and applications that monitor and manage the cow breeding zone.



We are a team of engineers & problem solvers who deliver value across areas of IoT, hardware, embedded systems, big data, machine learning, DLT, DevOps, and software engineering.

Our Client

  • Name: MuuMap
  • Line of business: Hardware for the dairying industry
  • Country: Poland




Previously, confirmation of the number of litres of milk collected from the producer was entered manually on an External Receipt (PZ) printout. After completing the milk collection route, the drivers would deliver the forms to the Purchasing Department, which had to manually enter more than 2,000 collections per day into the accounting system, and errors arising in the process were verified manually.


The solution to this problem is the D_Box, a device that automatically records information on the quantity of milk transported and sends it to a cloud platform. The D_Box is a telematic control unit fully integrated into the transport vehicles. Through the D_Box, data from sensors placed on the transport vehicles (such as the milk quantity counter, its temperature or the vehicle’s location) are collected in real time and sent to the cloud platform.

How did it all begin?

The goal

The aim of the project was to complement the IT system on offer with the devices necessary for the digitalisation of the dairy industry. Mobile IoT devices, mounted in the transport vehicles, were implemented to monitor in real time the parameters of the goods being transported and the condition of the vehicles themselves. The D_Box enables the automation of data reporting in the milk haulage process.

First version

The first versions of the device were installed in around 100 vehicles monitored by our IT system, enabling daily reporting of 2 million litres of milk from 2,000 suppliers. The information gathered allowed us to develop and launch in 2020 an industrial-grade device with an open-source operating system (allowing the system to be self-optimised to its own needs), based on the latest technology, as a response to the needs of the dairy market.

What is behind the D_Box?

D_Box is a telematics control unit fully integrated with transport vehicles. Through D_Box, data from sensors placed in transport vehicles (such as milk quantity counter, milk temperature or vehicle location) are collected in real time and sent to the cloud platform. Our solution allows you to automate the entire milk collection process. Basing the system on reporting the status and location of transported goods in real time translates into better decision-making, reduced documentation and a significant reduction in errors and abuses. The provided data allows for reducing the number of errors in the recorded number of liters of milk provided by farmers, eliminating the possibility of fraud and forgery during milk collection, automatic control of the entered information and simultaneous reduction of manual data entry, as well as the possibility of dynamically changing the route depending on the needs.

Thanks to the use of a microprocessor in the ARM architecture, D_Box has an operating system that allows it to be operated without competences in programming microcontrollers (the C language); the configuration of the device and the creation of custom scripts are done from the operating-system level. All input/output terminal elements have drivers, which makes operating them similar to communicating with a printer or keyboard.

Integration with the 4G network and the use of Bluetooth Low Energy (BLE) technology allowed the creation of a fully mobile device intended for transport vehicles. The project uses the latest GSM communication technology: the module works in LTE-M, which is dedicated to IoT solutions, including mobile ones, and is able to maintain the appropriate quality of data transmission even with poor network coverage. D_Box enables the transmission of information from sensors in real time, and thanks to GPS it is possible to monitor not only the parameters of the transported goods but also their location.

Thanks to an extensive communication system based on BLE and Wi-Fi, the devices can easily be configured to work as a master communication device, serving as a medium for data transmission between sensors and the system. Sensors installed in vehicles send data to D_Box, which then passes them to our IT system. Integration with the Hardware Security Module ensures the highest level of security of the data transmission services provided.

The main advantages of D-Box

Plug and play approach to IIoT Data Collection
D_Box is an electronic multifunction device for prototyping, MVP development & pilots in Industrial Internet of Things applications. Wireless connection, many interfaces, flexible design.

Single-board computer with many applications
The device is a single-board computer with an ARM processor running a Linux operating system distribution; its peripherals are operated with standard commands.
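Because the device runs Linux, many peripherals are exposed as ordinary files (for example under `/sys`), so reading a sensor can be as simple as reading a file. Below is a minimal sketch of this idea, using a stand-in file in place of a real sysfs node; the path and the millidegree scaling mirror common thermal-zone drivers but are illustrative, not specific to D_Box:

```python
from pathlib import Path
import tempfile

def read_temp_millideg(node: Path) -> float:
    """Read a thermal-zone-style node that reports millidegrees Celsius."""
    raw = node.read_text().strip()
    return int(raw) / 1000.0

# Stand-in for something like /sys/class/thermal/thermal_zone0/temp
# on a real board; here we create a temporary file instead.
with tempfile.TemporaryDirectory() as d:
    node = Path(d) / "temp"
    node.write_text("42500\n")          # a kernel driver would supply this
    celsius = read_temp_millideg(node)  # 42.5 degrees Celsius
```

This file-based model is what makes such devices approachable: any language that can read a file can read a sensor, with no microcontroller toolchain required.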

Limitless Implementation
The device can be installed almost anywhere: a production hall, farm, warehouse, construction site, and many other locations. It is equipped with a GSM module for wireless communication, so it is also suitable for mobile applications on trucks, construction vehicles, ships, and more.

Remote Access
The device receives data from sensors or other devices and sends it using the GSM network to supervisory systems. It provides remote access to data from devices installed onsite. D_Box enables over-the-air updates and remote configuration of sensors.

Validated Technology
We have implemented D_Box into a popular IT system for the dairy industry. This allowed the entire milk collection process to be automated, eliminating manual entry of the milk quantities supplied by farmers and enabling transport optimization and monitoring.

Hardware for special tasks
D_Box enables wireless communication via GSM, BLE and Wi-Fi networks and wired communication – CAN bus, USB, Ethernet, I2C. Selection of external connectors, customized mounting options, custom branding and many more options are available to meet your needs.

The results of the dairy revolution

With the introduction of D_Box, the company has complemented the range of its own IT products with a multi-purpose hardware solution that provides the necessary real-time data for the entire milk collection process and allows it to be automated. By integrating our software system with a dedicated hardware solution, we were able to offer the dairy industry a comprehensive solution to meet its needs.

Due to its versatility, the device is also aimed at a broader spectrum of companies in the IoT industry and companies looking to develop their own products. D_Box provides the professional, multi-tasking hardware layer necessary for this type of project, and it is the only such solution dedicated to IT companies that they can use in their own IoT projects. The company plans to attract partners interested in purchasing the devices as a finished hardware component for their own proprietary solutions.

Implementation of D-Box

400+ devices installed

8000+ reports daily with milk quantities, temperature, etc.

500+ routes served daily

Estimate your project!

Let’s revolutionize your customer experience together. Get in touch today!