Click-Ins: Co-Inventor Dmitry Geyzersky Discusses Using AI to Analyze Automotive Accidents
Click-Ins is a startup that uses AI to assess and inspect vehicle damage for insurers, rental car companies, and other stakeholders in the automotive sector. Co-created by tech entrepreneur and solutions architect Dmitry Geyzersky, Click-Ins also draws on an assortment of other disciplines — such as photogrammetry and 3D modeling — and a patented hybrid technology called DamagePrint™. The technology is designed to be user-friendly, enabling anyone with a smartphone to generate a highly accurate damage assessment in less than a minute.
Dmitry recently discussed his technology and company via an exclusive interview.
Meagan Meehan (MM): How did you get interested in technology and why did you gravitate towards a car-focused program?
Dmitry Geyzersky (DG): As a hands-on software architect, I’ve always been fascinated by the mechanics behind complex systems. If you enjoy taking apart watches or solving intricate puzzles, chances are you’re naturally curious about how things work beneath the surface — and that’s exactly what drew me to technology. In my early career as a software architect and senior technology consultant, I was often called on to build working prototypes and deliver high-end, often mission-critical solutions for both public and private sector clients. Some of the intelligence projects I worked on demanded not only a deep understanding of technology but also a fair amount of creative problem-solving. In 2014, Eugene Greenberg and I co-founded Click-Ins with the goal of detecting and preventing insurance fraud. Our original ambition was to build what we half-jokingly called the “Palantir of insurance.” The automotive sector, especially when combined with fraud detection, offered a uniquely rich and complex environment. It became the perfect proving ground for applying advanced visual intelligence — where precision, speed, and trust are absolutely essential, but where go-to-market timelines are often more manageable. That combination made a car-focused program not only attractive but strategically obvious.
MM: When did it dawn on you that AI could be used in this way?
DG: It wasn’t a single “eureka” moment, but rather a gradual realization that came through hands-on experimentation. In the early stages, we relied heavily on traditional computer vision and photogrammetry techniques. While they were valuable, we quickly encountered limitations — especially when it came to tasks like object classification, detection, and instance segmentation. The algorithms we were using weren’t stable or consistent enough to perform reliably in the real world.
That’s when deep learning really proved its worth. These models helped us overcome critical bottlenecks and brought a new level of robustness and adaptability to our system. It became clear that if we wanted to scale our solution, automate complex visual tasks, and deliver accurate results from something as simple as a smartphone photo, AI — and specifically deep learning — was the key. At the time, many competing solutions relied on costly infrastructure: scanning tunnels, specialized cameras, or manual labor. We recognized that AI, and more specifically deep learning and visual intelligence, could radically simplify and democratize this process using just a smartphone. That was the breakthrough moment: understanding that intelligence in software could replace complexity in hardware.
MM: How did you develop this technology and how long did it take to perfect?
DG: The development journey has spanned nearly a decade. We initially set out to build a comprehensive intelligence platform to fight insurance fraud. It was only after careful iteration — and with the guidance of our investors — that we focused on one vertical: automated vehicle inspection using images. We developed a proprietary system from the ground up, relying on neural networks, computer vision, and a unique visual intelligence framework. Perfection, of course, is a moving target in technology, but what sets us apart is that we’ve achieved high scalability and production-grade reliability on mobile devices, without the need for hardware dependencies.
MM: What do you personally believe is the most impressive aspect of the Click-Ins system?
DG: What I find most impressive is how we’ve managed to bring together high accuracy, usability, and scalability — without relying on any specialized hardware. A user can take a few guided photos on a regular smartphone, and our system automatically detects, segments, and reports even subtle damage with remarkable precision. But what really gives us a technological edge is our unique approach to synthetic data. We developed proprietary methods to generate realistic, annotated images at scale, simulating a wide range of vehicle types, lighting conditions, damage scenarios, and environments. This has been a game-changer — allowing us to train and fine-tune our models with exceptional control and consistency, even for edge cases that are rare in the real world. By combining synthetic data with deep learning, we’ve achieved a level of reliability and domain adaptation that sets us apart. What takes this even further is our patented technology — DamagePrint™ — which enables us to uniquely identify and match damage across different scenes, camera types, and viewing angles. This fusion of AI precision, scalable training data, and proprietary visual intelligence is, in my view, the most powerful and differentiating aspect of what we’ve built.
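Dmitry describes generating annotated synthetic images to train the models. As a rough illustration of the general idea (not Click-Ins' actual pipeline, whose methods are proprietary), the appeal of synthetic data is that every generated image comes with perfect ground-truth labels for free, because the "damage" is placed programmatically. Below is a minimal, hypothetical sketch in Python/NumPy: a grey panel with a randomly placed elliptical blemish, returned together with the pixel mask and bounding box a segmentation model would train on.

```python
import numpy as np

def make_synthetic_sample(size=128, seed=None):
    """Generate one synthetic training image plus its ground-truth labels.

    Illustrative only: a grey "panel" with a random bright elliptical
    "damage" region. Returns the image, the pixel-level segmentation
    mask, and the bounding box -- annotations a real pipeline would
    feed to a detection/segmentation model.
    """
    rng = np.random.default_rng(seed)
    image = np.full((size, size), 0.5, dtype=np.float32)           # uniform panel
    image += rng.normal(0, 0.02, image.shape).astype(np.float32)   # sensor noise

    # Random ellipse standing in for a dent or scratch
    cy, cx = rng.integers(20, size - 20, 2)    # center, kept away from edges
    ry, rx = rng.integers(5, 15, 2)            # radii
    yy, xx = np.mgrid[0:size, 0:size]
    mask = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0

    image[mask] += 0.3                         # damaged region is brighter
    ys, xs = np.nonzero(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())   # x0, y0, x1, y1
    return image, mask, bbox

image, mask, bbox = make_synthetic_sample(seed=42)
print(image.shape, bbox)
```

A production system would of course render far more realistic scenes (3D vehicle models, materials, lighting, camera effects), but the principle is the same: the generator controls the scene, so the annotations are exact and cost nothing to label.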
MM: What have been some of the reactions you’ve gotten from automotive and insurance professionals regarding this creation?
DG: Reactions have been very encouraging. Automotive players, especially in the U.S., appreciate how the system supports real-time, high-volume vehicle transactions — whether it’s trade-ins, auctions, or fleet inspections. Insurance companies see our technology as a game-changer that can reduce fraud, accelerate claim resolution, and minimize operational costs. Perhaps the most telling feedback is when global manufacturers and even military bodies approach us — not because we’re the biggest, but because they trust that we can solve problems the larger players can’t.
MM: What other kinds of technology have you invented?
DG: Over the years, I’ve had the opportunity to work across a wide range of technological domains beyond automotive AI. One of my earlier innovations was a high-performance computing algorithm designed to simulate public transportation networks at scale — a solution that helped model urban mobility patterns with exceptional precision. I also developed a semantic search engine as an alternative to Google, built around a fundamentally different ranking algorithm focused on contextual meaning rather than keyword matching. In parallel, I’ve led the architecture and design of large-scale government systems, where stability, scalability, and security were paramount. And I’ve worked with numerous innovative startups, helping them bring complex technical ideas to life — whether through deep tech prototypes, product-ready platforms, or custom software infrastructure. These experiences have given me a broad and practical understanding of how to build intelligent systems that are not just theoretically interesting, but actually work in the real world.
MM: What other areas might you turn your inventive attention towards in the near future?
DG: Our core technology is highly adaptable. While we’ve focused on vehicles, we see strong potential in adjacent domains — marine vessels, aircraft, industrial machinery — any asset where visual inspection is key. Our system is designed to scale beyond cars, and we’re already exploring several of these directions. We’ve even had commercial engagements with homeland security entities, which validates the broader applicability of our approach.
MM: What are your ultimate goals for the future of Click-Ins and is there anything else that you would like to mention?
DG: Our vision is to become the de facto standard for AI-powered visual inspection globally. We want to empower every player in the automotive and insurance ecosystems — regardless of size — to access fast, accurate, and transparent vehicle assessments. Ultimately, it’s not just about cars or claims. It’s about redefining trust in visual data. What I’d like to add is that we’re proud to be part of the deep tech movement. We bring not only technology but also values rooted in integrity, transparency, and resilience. And that, I believe, is just as important as the algorithms behind it.