
Outline of the problem

The objective of the competition was to identify new biomarkers that contribute to the progression of Parkinson’s disease. Experts believe that alterations in protein and peptide levels signal further development of the disorder. By forecasting future MDS-UPDRS scores for patients, we could devise more effective treatments for each unique case. Since there is no known cure for Parkinson’s disease, we must understand all phases of its development in order to ease unpleasant symptoms.

Proposed solution

After carefully analyzing the problem, the team came up with the innovative idea of developing an ML model that could predict MDS-UPDRS scores for each individual patient, personalized to their protein and peptide levels.

The solution required a deep dive into the data to uncover hidden patterns and relationships. These patterns were then used to train machine learning algorithms that could predict the progression of Parkinson’s disease at monthly intervals.

Technology overview

To improve our understanding of Parkinson’s disease and develop new treatments, we used a variety of technologies, including:

  • Python for scripting and experiment architecture
  • Pandas for tabular data handling and EDA (Exploratory Data Analysis)
  • CatBoost and XGBoost to build gradient-boosted decision tree models for score prediction (see the tuning sketch after this list)
  • Optuna for hyperparameter optimization
  • TabNet (DNN) for finding patterns in tabular data
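
To make the modelling step concrete, here is a minimal sketch of tuning a CatBoost regressor with Optuna for UPDRS score prediction. The file name, column names, and search ranges are illustrative assumptions, not the project’s actual configuration.

```python
import optuna
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical file and column names; features are assumed numeric.
df = pd.read_csv("train_clinical_data.csv")
X = df.drop(columns=["updrs_total"])
y = df["updrs_total"]

def objective(trial):
    params = {
        "depth": trial.suggest_int("depth", 4, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "l2_leaf_reg": trial.suggest_float("l2_leaf_reg", 1.0, 10.0),
        "iterations": 500,
        "verbose": 0,
    }
    model = CatBoostRegressor(**params)
    # Negative MAE, so a larger value is better for the maximizing study below.
    return cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```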

Monitoring and Predicting Parkinson’s Disease


Doctors currently monitor the progression of Parkinson’s disease (PD) by taking a medical history, performing a physical examination, ordering tests, using rating scales, and tracking the person’s progress over time. However, there is no way to predict with certainty how quickly PD will progress in each individual.

Now, DAC.digital researchers, among many others, are developing a system to help predict the development of PD using machine learning (ML) algorithms and models. This system could be used to identify people who are at risk of developing PD and to provide them with early treatment. Early treatment could help slow the progression of the disease and improve the quality of life for people with PD.

The development of this system is a promising step towards improving the diagnosis and treatment of PD. By better understanding the factors that contribute to PD progression, researchers can develop more effective treatments and improve the lives of people with this disease.

Initial talks and kickoff

A team of researchers was tasked with analyzing tabular data from over 10,000 subjects, including patients’ peptide and protein levels (measured in cerebrospinal fluid samples) and past Unified Parkinson’s Disease Rating Scale (UPDRS) scores, along with the clinical state on medication. Data was collected during cyclic visits and often included incomplete information.

The team’s goal was to identify any hidden relationships between the data that could help them better understand the progression of PD. To do this, they had to:

  • Normalize the data and impute missing values so that all subjects have complete records with values on the same scale (see the sketch after this list).
  • Perform statistical operations on the data, such as calculating means, standard deviations, and correlations.
  • Research correlations between different variables to identify any potential relationships.
  • Investigate in depth the methods responsible for grading the importance of selected variables and predicting PD progression.
  • Show the progress of the disease with graphs to visualize the changes in PD symptoms over time, and evaluate the computational pipeline with local cross-validation.
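
As a concrete illustration of the first two steps, here is a minimal pandas sketch of per-patient imputation, standardisation, and correlation analysis. The file and column names are hypothetical, not the project’s actual data layout.

```python
import pandas as pd

# Hypothetical file and column names: one row per visit, protein_* feature
# columns, a patient identifier, and the UPDRS target.
visits = pd.read_csv("visits.csv")
feature_cols = [c for c in visits.columns if c.startswith("protein_")]

# Impute missing values with the per-patient median, falling back to the
# global median when a patient has no measurements for a feature at all.
visits[feature_cols] = (
    visits.groupby("patient_id")[feature_cols]
    .transform(lambda s: s.fillna(s.median()))
    .fillna(visits[feature_cols].median())
)

# Standardize to zero mean and unit variance so all features share one scale.
visits[feature_cols] = (
    visits[feature_cols] - visits[feature_cols].mean()
) / visits[feature_cols].std()

# Basic statistics and correlations of each feature against the target.
print(visits[feature_cols].describe())
print(visits[feature_cols + ["updrs_total"]].corr()["updrs_total"].sort_values())
```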

The experts’ work succeeded in showing that the relationship between the given peptide and protein data was not sufficient to accurately forecast PD progression. This was due to the incompleteness of the provided data and the incorrect incorporation of the control group into the data set. A stronger signal was found in the frequency of visits, although this was not explored by the DAC experts.

Team composition

The team was self-organized and efficient. When new tasks came in, the person who was interested in the topic was responsible for that part. This allowed the team to be flexible and adaptable to changing priorities.

The team was eager to deepen their skills in data analysis, especially in processing tabular data. They had to master new methods and techniques, and they often had to troubleshoot problems. It was a tough nut to crack, but the team persevered and eventually gained a deep understanding of tabular data processing.

The team’s hard work paid off. DAC engineers were able to successfully process the tabular data and gain valuable insights. This knowledge will be essential for future projects, and the team is confident that they can handle any challenge that comes their way.

Results

Our team developed a working machine learning (ML) model that can predict the progression of Parkinson’s disease (PD). We started with raw data and imputed the missing records required for accurate predictions. Then, we trained ML algorithms on the discovered relationships to make predictions about PD progression at set time intervals. Our computational pipeline was able to make correct predictions, and we are confident that it can be used to improve the diagnosis and treatment of PD.

Conclusion

This project was a valuable learning experience that allowed us to apply our knowledge of data analysis and machine learning to a real-world problem. We are proud to have made a positive impact on the lives of people with Parkinson’s disease.



Our client

The company is a startup founded by veterinary experts who wanted to create smart solutions for animal healthcare.

They aimed to create wearables and complete products dedicated to professional veterinarian caregivers like animal clinics, hospitals, etc.

Challenge

We were tasked with taking care of the project as an interim product owner. We needed to create a viable product that could be tested. It involved both software and hardware aspects.

We were initially tasked with creating a machine learning algorithm for the device. After conducting discovery workshops, our team proposed a new roadmap for the product.

Solution

The team of experts proposed a deep-learning approach for the initial classification and assessment of the severity of the illness. The data could be later extracted and used to monitor the disease further. 

After identifying the necessary steps to clean the provided data, we employed deep learning models and a deep convolutional neural network based on raw data for pattern recognition. 

Scope of work:

  • Discovery and analysis
  • Product roadmap
  • Data analysis
  • Deep neural network
  • AI modelling
  • Hardware advisory
How to detect osteoarthritis (OA) in dogs using AI

Veterinary experts with a mission and need for technology assistance

 


The startup was founded by veterinary experts with a plan to support professionals and veterinary clinics with technological solutions. They planned to develop an end-to-end solution that would help diagnose osteoarthritis disease among dogs.


The device’s design had to be compact and light enough to attach easily to the dog’s collar. It would then record the dog’s movements while walking or running to pinpoint indicators of potential OA symptoms, alerting the owner to seek further diagnosis. With their vast domain knowledge, our client needed technology experts to complete the technical part of the product.

The first significant challenge involved a tight budget and timeline. The company had already tried cooperating with other tech companies, but the results fell short, and a lot of resources were wasted. They needed a partner who could bring the product to a state viable enough to test whether it could work, and whether it was accurate enough to justify investing in the final product.


Cleaning the data for reliable results

Before developing the algorithm, our team (also comprising PhD-level specialists) needed to evaluate the available data collected. Upon examining it, our team noticed that the data wasn’t correctly annotated. There was also an issue with the logistic regression model and overall technical trouble with computer-aided detection and data collection from the gyroscope. 


We decided to employ deep learning models and deep convolutional neural networks to recognise patterns in the collected raw sensor data. We wanted to prove that the deep learning solution generalises across different breeds. At first, we relied on the previously collected training data provided to us by data scientists.

Our solution involved developing AI algorithms to process the data from the accelerometers and gyroscopes, creating classification algorithms that operate on the collected data sets, and adjusting the software/hardware interfacing to obtain stable results.
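
For illustration, here is a minimal sketch of the kind of model this describes: a small 1D convolutional classifier over fixed-length windows of accelerometer and gyroscope readings. The channel count, window length, and layer sizes are assumptions, not the project’s actual architecture; PyTorch is used here only as a convenient framework.

```python
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    """Classifies fixed-length sensor windows as OA vs. healthy."""

    def __init__(self, in_channels: int = 6, n_classes: int = 2):
        # 6 channels assumed: 3 accelerometer + 3 gyroscope axes.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. a 5-second window at 100 Hz.
        return self.classifier(self.features(x).squeeze(-1))

model = GaitCNN()
window = torch.randn(8, 6, 500)   # a batch of 8 synthetic windows
logits = model(window)            # shape (8, 2): OA vs. healthy scores
```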

Machine learning as the optimal approach to success


Machine learning and artificial intelligence were at the heart of the project. Our task was to analyse the provided data sets of dog-movement measurements and, using machine learning models, to distinguish between dogs with and without OA based on the raw data collected from the sensors.

Starting from the training data provided by the data scientists, we monitored dogs’ activity over a long period, since precise deep learning models require large amounts of high-quality data. After analysing the input and output variables, we aimed to significantly improve on the existing logistic regression models, seeking the discriminative features and data points crucial for detecting the illness. Based on these features, we were able to determine analytically whether a dog had OA.

Our team proposed a deep neural network to classify OA severity and increase the diagnostic accuracy of the device. This allowed us to sort through all the data and extract deep features, and to distinguish dogs with OA symptoms from healthy ones. The resulting deep-learning model was designed so that it could work with more data in the future.

The milestones of the process

  • Discovery and analysis meetings with the client, which helped us understand their context and ultimate goals.
  • Auditing the work and results of the previous partners.
  • Assessing the feasibility of the product.
  • Planning the roadmap of the process and development.
  • Proposing a methodology for creating a viable product.
  • Developing and verifying the algorithm.
  • Drafting the blueprints for large-scale testing.

Critical process takeaways:

  • Cleaning the provided data to make work on an optimal solution easier
  • Proving the product feasibility to ensure good prospects for further development
  • Outlining a solid methodology to effectively boost TRL

Results

Our experts and the client were happy with the results achieved during our cooperation. We were able to extract all the necessary data and implement further measures. The speed of delivery and the quality of the provided solutions left the product in a state that allowed for further testing and development toward its final stages. Our team built a solid and reliable foundation for the company to proceed with development upon receiving funds.

Important figures

  • 180h – the time it took to complete the requested part of the project (as opposed to the estimated 240h)
  • 80% – the average level of accuracy the model displayed

 


About the client

  • Name: Swiftly 
  • Line of business: Automated Recruiting and Unbiased Recruitment tools
  • Founding year: 2020
  • Country: Sweden

Problem overview

Swiftly, a Stockholm-based startup, grappled with two significant challenges within their job portal. Firstly, accurate categorization of job listings posed difficulties, leading to suboptimal user experiences and ineffective job matching. Secondly, the manual job application process was time-consuming and resource-intensive, restricting scalability.

Proposed solution

Our approach comprised two pivotal components:

  • Web Scraping Tool: We developed a sophisticated web scraping tool to extract precise keywords from job listings, enhancing categorization accuracy.
  • SOTA Presentation: We created a visionary state-of-the-art (SOTA) presentation, demonstrating automated field auto-fill capabilities to streamline the application process.

Applied technologies:

  • Python, Selenium, and FastAPI were used to implement a service able to scrape form fields from a given website and automatically fill in the forms once the data is provided (see the sketch after this list).
  • Neo4j and PostgreSQL were the databases used for storing graph data describing relations between job offers, job seekers, and other entities (used to look for mutual associations), as well as the more general, structured metadata of job offers.
  • scikit-learn was used to implement a recommendation engine that looks for the best matches between job seekers and job offers.
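
As a rough illustration of the scraping component, the sketch below uses Selenium to discover a form’s fields and fill them from a candidate profile. The URL, selectors, and payload are hypothetical; in the actual service this logic would sit behind a FastAPI endpoint.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/apply")  # hypothetical job application form

# Discover the input fields present in the form.
fields = driver.find_elements(By.CSS_SELECTOR, "form input, form select, form textarea")
schema = [{"name": f.get_attribute("name"), "type": f.get_attribute("type")} for f in fields]
print(schema)

# Fill in the form once the candidate's data is provided.
candidate = {"first_name": "Jane", "last_name": "Doe"}  # hypothetical payload
for name, value in candidate.items():
    matches = driver.find_elements(By.NAME, name)
    if matches:
        matches[0].send_keys(value)

driver.quit()
```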

Pre-existing Challenges:  

Before implementing the SOTA and POC solutions, Swiftly faced several challenges:
  • Inaccurate Categorization: Swiftly encountered difficulties in accurately categorizing job listings, causing mismatched job offers and candidates.
  • Manual Application Process: Manual application processes consumed time and resources, impeding scalability.
  • Insufficient Automation: The absence of automated keyword extraction led to imprecise job listing categorization.
  • Scaling Issues: Manual processes and categorization limitations hindered scalability.
  • Lacking Technological Strategy: Swiftly lacked a comprehensive technology-based strategy to enhance categorization accuracy and streamline processes.

Implementation Approach.

Our implementation strategy followed these steps:
Initial Talks and Kickoff

Collaborative discussions between Swiftly’s leadership and DAC.digital’s technical team laid the groundwork for a productive partnership, aligning expectations and goals.

Team Composition

An 8-person team, comprising ML Engineers, Embedded Systems Engineers, Data Scientists, and Fullstack Developers, came together to tackle the project.

Agile Collaboration

Daily stand-up meetings and ongoing communication facilitated iterative development and enhancements.

Results and Impact:

The project concluded with the creation of an advanced SOTA solution that effectively addressed Swiftly’s challenges. This solution improved job listing categorization precision and streamlined the application process. The SOTA also offered a proof-of-concept for refining job listing keywords and automating application field population.

Results

Swiftly’s collaboration with DAC.digital resulted in the successful resolution of their job portal challenges through the implementation of innovative automation solutions. The web scraping tool and the SOTA presentation highlighted the potential of technology to enhance processes, elevate user experiences, and pave the way for future enhancements.

 

Key numbers

  • Project Duration: Successfully completed within 16 days!


Outline of the problem

The key issue is to reconstruct 3D objects and buildings from unstructured image collections freely available on the internet. The challenge is to identify which parts of two images capture the same physical points of a scene, establish correspondences between pixel coordinates of image locations, and recover the 3D location of points by triangulation. 

Proposed solution

The proposed solution is to develop a machine learning algorithm based on computer vision techniques to register two images from different viewpoints. By creating a method to identify key points in the images and establish correspondences between them, we can calculate the fundamental matrix, which provides essential information about where and from which viewpoints the photos were taken. This process will lead to the generation of 3D models of the landmarks.
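
A minimal sketch of this classical pipeline with OpenCV, assuming SIFT features and RANSAC estimation (file names are placeholders): detect keypoints in two views, match them, and estimate the fundamental matrix from the correspondences.

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and descriptors in both views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# The fundamental matrix relates pixel coordinates of the same 3D points
# across the two viewpoints; RANSAC rejects outlier matches.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(F)
print(int(mask.sum()), "inlier correspondences")
```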

Which technologies we applied:

  • Python,
  • PyTorch,
  • Kornia,
  • OpenCV

Connecting Perspectives: Algorithmic Solutions for Cross-View Landmark Recognition in Tourist Imagery.

There were collections of tourist images of 16 landmarks taken from various angles and distances, such as nearby, below, and sometimes with obstructions like people. The challenge was to develop algorithms capable of identifying key points in these images (located on buildings) and then establishing the correspondences between them across different viewpoints, even without knowing the exact camera parameters used to capture the images. The difficulty lay in dealing with diverse viewpoints, lighting conditions, occlusions, and user-applied filters in the images, without access to capture locations or device parameters such as camera models and lenses.

 

The people and tech behind our project.  

The team consisted of five developers and researchers with varying levels of experience in computer vision, machine learning, and image processing. 

Each member was responsible for working independently in their niche area and performing experiments, while also collaborating and discussing progress with others to finally integrate the best approaches into one system.

 

The project leveraged Python for scripting and building experiment architecture. PyTorch was used to build and train neural networks for keypoint detection and matching, while Kornia provided state-of-the-art models for Computer Vision. OpenCV handled image preprocessing and image manipulation tasks. This cohesive tech stack enabled efficient experimentation and remarkable progress in 3D object reconstruction from diverse image collections.
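
As one concrete example of what this stack enables, the sketch below runs Kornia’s pretrained LoFTR matcher on a pair of images. The file names are placeholders, and the choice of "outdoor" weights is our assumption for landmark photographs.

```python
import cv2
import torch
import kornia.feature as KF

def load_gray(path: str) -> torch.Tensor:
    # LoFTR expects float tensors shaped (B, 1, H, W) with values in [0, 1].
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return torch.from_numpy(img)[None, None].float() / 255.0

matcher = KF.LoFTR(pretrained="outdoor")  # weights suited to outdoor scenes
with torch.no_grad():
    out = matcher({"image0": load_gray("view1.jpg"),
                   "image1": load_gray("view2.jpg")})

kpts0, kpts1 = out["keypoints0"], out["keypoints1"]  # matched pixel coordinates
print(f"{len(kpts0)} correspondences found")
```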

A Holistic Journey through Landmark Recognition: From Exploration to Validation.

Unveiling Insights Through Literature Exploration

First, we delved into an in-depth literature review to gain a comprehensive understanding of existing solutions and techniques in the realm of stereophotogrammetry and 3D reconstruction from images. This initial phase allowed us to grasp the state-of-the-art approaches and identify potential areas for improvement.

Navigating Algorithms and Models for Key Point Identification

Next, we proceeded with experimentation, exploring various computer vision algorithms and machine learning models. Our primary aim was to identify key points within the images and establish meaningful correspondences across different viewpoints. This experimental stage enabled us to assess the performance and limitations of different approaches, guiding us towards the most promising paths.

From Theory to Reality: Prototyping and Refinement

With valuable insights from the experimentation phase, we moved on to developing prototypes. These prototypes served as crucial testing grounds for implementing diverse algorithms and fine-tuning parameter combinations on our dataset. Through this iterative process, we gained valuable feedback and refined our methods.

Forging Cohesion: Seamlessly Merging Algorithms and Techniques

As the project’s complexity demanded an integration of various algorithms and techniques, we dedicated substantial effort to ensuring a seamless fusion of components. This integration phase required meticulous coordination and harmonization of different modules to ensure they functioned cohesively.

Testing the Waters: Evaluating Performance and Potential

Finally, we put our solution to the test. Through extensive testing on unseen data, we rigorously evaluated its performance, assessing its accuracy and generalizability. This thorough examination allowed us to validate the effectiveness of our approach and ascertain its potential for real-world applications.

Results

The team’s developed machine learning algorithm successfully registered images from different viewpoints and calculated the fundamental matrix. This allowed them to create accurate 3D models of the landmarks from the collections of tourist images.

Key numbers

The project achieved success in solving the complex computer vision problem in a relatively short time frame of slightly over one month.


The proposed solution showed promising results and has potential applications in virtual and augmented reality, cultural heritage preservation, and other 3D reconstruction tasks where the data is incomplete.

Michał Affek, Embedded Machine Learning Researcher

Pushing Limits in Computer Vision: Join Our Journey of Innovation.

Computer vision topics can be both challenging and innovative, as demonstrated by DAC.digital’s remarkable research-science project in reconstructing 3D objects and buildings from images. If you are interested in embarking on a project that involves computer vision and pushing the boundaries of this cutting-edge technology, we invite you to contact us to collaborate and work together!

Let’s join forces to unlock new possibilities in the world of computer vision.


Customer.

Sports Computing
Sports Computing combines the best of both worlds – a high-tech, AI-based app with motion tracking, and football. Changing the way we train, stay active, and share our love of the sport, Sports Computing lets you enjoy football no matter where in the world you are. KickerAce – all you need is your phone and a ball.
Experience we shared.
Computer vision processing
Artificial Intelligence & Machine Learning
Mobile application development

Problem.

  • Need to promptly deliver a revamped version of the app based on a new UI design.
  • The software was expected to facilitate a large number of concurrent users, which required full scalability.
  • Lack of internal tech resources on the client’s end.
  • Looking for a team with competencies across a broad spectrum of skills – including mobile development, backend, video and image processing, AI/ML, and the ability to package all these skills together.
  • Previously choosing a partner that failed to deliver expected results and caused a go-to-market delay. 
  • Unmaintainable, messy code with no versioning scheme.

Solution.

  • Initially, performing detective work to find the most recent version of the app, fixing all burning issues, and deploying the app again to the testers to create a baseline.
  • Cleaning up the code and redesigning the application based on the new designs.
  • Bringing the backend into order based on established good practices – decoupling environments, creating separate development and production infrastructure, setting up proper DevOps infrastructure in the Azure context, and setting up CI/CD pipelines for the mobile app.
  • Setting up a dedicated team tackling the image analysis aspects of the app.
  • Developing the product in close cooperation with Sports Computing’s Product Owner.

Process.

The services were performed by DAC.digital developers chosen to form an interdisciplinary, independent team. The core areas of support – data science with Python, image analysis, and DevOps – were aligned during so-called “Block Planning Sessions” or prioritized and assigned to our team via email. The collaboration began with the KickerAce mobile app development and continued with the Shot Analyzer software.

Delivered value.

The customer has been provided with fully scalable and functional software that met the deadlines, requirements, and specifications presented at the beginning of the project. The collaboration between DAC.digital and the customer’s teams has been based on transparency, openness, and honesty, resulting in solid trust. Our problem-solving approach and excellent understanding of both technology and business allowed the Sports Computing team to feel comfortable and confident in the results of our work.

Testimonial.

Most important is that you cover our professional needs, which are pretty extensive and different from traditional projects. We couldn’t get a more ideal partner with extraordinary skills both within AI and application development. Professional and transparent project management is vital. PM and interactions are working exceptionally well. Your ability to work independently and come up with constructive alternative solutions, understandable for a layperson, has reduced the stress and concerns. We appreciate the good chemistry. We see DAC.digital as more than just another developer. We see you as an extension of Sports Computing.
Kjell Heen
CEO of Sports Computing

Used Technologies.

React Native
Azure
Terraform


Advantages of IREENE system.

IREENE adds significant capability to product knowledge management in modern manufacturing enterprises.

Topic modelling and semantic representation of existing documents in a knowledge graph might minimise the time necessary for the manual processing of essential documentation by individuals involved in product management across several organisational verticals.

It has already achieved great success in offering insights for product management, including but not limited to operations, compliance, R&D, and intellectual property rights.


State of the art.

Every manufacturing organisation must deal with a substantial volume of external documents. Intellectual property and fundamental standard compliance must be studied and analysed before development. Every day, patents (including Standard Essential Patents), technological standards, and scientific papers are searched across all sectors.

However, the amount of relevant textual material available is huge. Over 3.4 million patent applications were filed globally in 2021, with the number growing by 5-9% each year since 2011. In addition, the average word count of patent applications has risen since the 1990s, surpassing 7,000 in 2007. An average reader would need around 200 years of nonstop reading to get through 3.4 million patent applications, even skipping titles, abstracts, and references.
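
The “200 years” figure is roughly consistent with a back-of-the-envelope check, assuming about 7,000 words per application and a sustained reading speed of around 250 words per minute:

```python
# Back-of-the-envelope check of the "200 years" claim (assumed reading
# speed of 250 words per minute; 7,000 words per application).
applications = 3_400_000
words_per_application = 7_000
words_per_minute = 250

minutes = applications * words_per_application / words_per_minute
years = minutes / (60 * 24 * 365)
print(f"{years:.0f} years of nonstop reading")  # ≈ 181 years
```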


A quick glance at the most prominent standards bodies demonstrates the breadth of accessible sources. There are 22,538 ISO standards, for example, and over 1,300 IEEE standards. With the rising digitisation of the sector and current technical advancement, we anticipate that these numbers will rise. Over 50 million scientific articles had been published by 2010, and the overall quantity of scholarly papers is doubling every nine years.

Patent information is used across many functions of contemporary organisations: in strategic management, as a foundation for monitoring the competitive environment, technology assessment, and even R&I portfolio management; in design and engineering, to survey state-of-the-art research; and in legal departments, where functionality, design, and implementation technique are studied in so-called “Freedom to Operate” analyses to determine whether the development and marketing of a product is permissible.


IREENE (Information Retrieval Engine) answers this need by providing methods of processing unstructured text documents in order to create a knowledge graph representing the contents of available sources.

The Solution: How does it work?

In the case of the digital industry, data-driven engineering and manufacturing refer not only to machine-generated data fed through IIoT but also to the vast accumulation of unstructured data, including textual content written in natural languages. The volume of available data is even bigger as virtual organisations build on the free flow of information and knowledge between direct partners and third parties.

Design, engineering, manufacturing, and other processes of industrial enterprises are deeply embedded in textual data, usually human-generated content such as patent files, scientific publications or industrial standards like IEEE or IEC. In order to embrace both the volume and potential of pertinent but heterogeneous data, it is necessary to make it machine-readable first. This is where IREENE comes in.

Input files to IREENE can include a wide range of inputs, such as patents, user requirements sheets, customer feedback, troubleshooting descriptions, failure and fault reports, insights from previous projects, regulatory considerations, engineering standards such as those defined by ISO, IEEE, or IEC, and product-relevant scientific publications.

IREENE processes input files of different formats (e.g. text documents, spreadsheets, presentations) in order to create a knowledge graph representing the contents of processed sources. The data sets used in development were subjected to topic modelling, an unsupervised machine-learning technique that detects similarities between documents and clusters the expressions that most accurately characterise a document’s contents.
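
As an illustration of this step, the sketch below applies latent Dirichlet allocation with scikit-learn, one common topic-modelling technique. The input documents and topic count are placeholders; we do not claim this is IREENE’s exact model.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpus; in practice these would be extracted document texts.
docs = [
    "patent claims describing a wireless sensor network for machine monitoring",
    "standard section on electromagnetic compatibility testing of devices",
    "scientific article on supply chain traceability with distributed ledgers",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic mixture

# The top-weighted words of each topic are the expressions that
# statistically characterise a cluster of documents.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = terms[weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {' '.join(top)}")
```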

IREENE uses the topic model to enable two functionalities: (a) smart semantic search and (b) a visual knowledge-graph browser. Together, they make it possible to apply the Business Platform for Distributed and Decentralized Data Exchange Ecosystems not only to the traceability use case but also to Electronics and ICT, as an enabler for the digital industry and for optimised supply chain management covering the entire product lifecycle in large ecosystems.

The ambition is to analyse documents and find similarities the way search engines like Google do, but in a B2B environment, thereby enabling product lifecycle management.
