Leveraging TensorFlow.js in Medical Imaging

Guest Blog Author: Dr. Erwin John T. Carpio


As a physician and radiologist, I have always wanted to learn and develop machine learning models and apply them to my field. However, machine learning felt like a foreign language to me, and with my limited programming experience and non-computer science background, I assumed it would be too challenging to step into this field.

I soon changed my mind, though. As I delved deeper into the learning journey, I found the field far more approachable than I had initially imagined.

RadLens: An Application for Reverse Image Search

Currently, I am focused on launching a tool called RadLens to inspire practitioners with similar backgrounds to mine. I hope that with the help of this tool, my peers can start thinking about how to use machine learning tools to assist in their daily work. This work is still ongoing (it has not yet become an FDA-approved medical device and is therefore not available for diagnosis).

For machine learning applications in healthcare, the most important task is to collect large and diverse training datasets and perform rigorous evaluations. A clinical-grade application may require collecting thousands of professional-grade images, or more. The application I wanted to develop therefore focuses on being a small tool for validating ideas. It can provide some convenience in my daily work, but of course I remain the physician ultimately responsible for making an accurate diagnosis.

My daily work includes identifying and diagnosing fractures, so my initial idea was to see if I could build an application that classifies types of fractures. In my first minimum viable product (MVP), I focused on two types of fractures of the forearm (Monteggia fracture and Galeazzi fracture).

In addition to diagnosing fractures, radiologists also undergo extensive anatomical training to learn the normal variants of the human body so they can distinguish them from actual pathology. The accessory bones of the foot are one example of anatomical variation: radiologists must learn to tell a normal accessory bone apart from an actual fracture fragment.

Sometimes we look up normal variants in reference books to confirm that what we are seeing is just an accessory bone and not an actual fracture fragment. There are many accessory bones in the foot, so recalling the right textbook entry from memory and searching for it by hand can be very cumbersome. I therefore decided to train a new ML model for version 2 of RadLens to detect several of the accessory bones I commonly encounter in my work.

I want to build a web application with the following features:
  1. Cover the fracture types from version 1 of the application, as well as the six accessory bones/normal variants added in version 2.
  2. Use the device’s camera to scan a suspected fracture site (version 1) or accessory bone/normal variant (version 2) in real time.
  3. Automatically direct me to Google Image Search to cross-reference images of the suspected fracture or accessory bone/normal variant.

Here are the details of the two versions or iterations of RadLens:

RadLens (version 1) is a web application that uses your device’s camera to scan X-ray images and attempts to predict the type of fracture shown (if a fracture is present), without uploading any images to the cloud, thus protecting patient privacy. If one of the fracture types the system was trained on is found, the application returns a score indicating its confidence in the classification. Since all inference is performed on the local device, no server is needed during classification, which also makes the process faster. Importantly, RadLens also returns a link to Google Image Search, allowing me to browse additional images of that fracture type and use them to assist in diagnosis.
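The scan-then-search hand-off described above can be sketched in a few lines of JavaScript. This is a hypothetical illustration, not RadLens’s actual code: the class list, the 0.8 confidence threshold, and the `searchLink` helper are all assumptions.

```javascript
// Hypothetical sketch: turn a classifier's output probabilities into a
// Google Image Search link, as RadLens does after classification.
// The class names and the confidence threshold are assumptions.
const CLASSES = ["Monteggia fracture", "Galeazzi fracture"];

function searchLink(probabilities, threshold = 0.8) {
  // Pick the highest-scoring class.
  let best = 0;
  for (let i = 1; i < probabilities.length; i++) {
    if (probabilities[i] > probabilities[best]) best = i;
  }
  // Below the threshold, report no confident match.
  if (probabilities[best] < threshold) return null;
  const query = encodeURIComponent(CLASSES[best] + " x-ray");
  return `https://www.google.com/search?tbm=isch&q=${query}`;
}
```

The returned link can simply be rendered as the hyperlink the user taps to cross-reference the prediction.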

RadLens powered by TensorFlow 2.0

The initial version of RadLens focuses on classifying two types of fractures (the Monteggia and Galeazzi fractures). Here is the user experience flow:


Using RadLens version 1. Left: using the phone camera to scan a forearm X-ray in real time. Middle: RadLens classifies the fracture as a Monteggia or Galeazzi fracture. Right: clicking the hyperlink opens Google Image Search to find images of the detected fracture type for cross-referencing.

  • Monteggia fracture: https://radiopaedia.org/articles/monteggia-fracture-dislocation?lang=us

  • Galeazzi fracture: https://radiopaedia.org/articles/galeazzi-fracture-dislocation?lang=us

The hyperlink to Google Image Search is very important, especially when training RadLens on a small dataset. Adding this hyperlink makes the tool interactive during use, allowing me to assess its accuracy at any time and use any mistakes it may make to guide my future work: I can study similar cases and verify whether the model is correct, or decide whether to collect additional training data and whether to consult other medical professionals.

Building RadLens

I did not set out to develop an AI model as accurate as a radiologist; instead, I focused on building a small model to help me search references faster. For the first version of RadLens (for fractures), my initial prototype was coded in Python with TensorFlow, and I trained a new model using transfer learning, a technique that reuses features learned by models already trained on large datasets. After some experimentation, I decided to adopt a simpler, more broadly accessible approach.

I discovered Google’s Teachable Machine website, which lets you train a computer to recognize your own images, sounds, and poses. You can even upload training data through the UI and train in real time in the web browser. With a model generated by Teachable Machine, I created the second prototype of RadLens (for accessory bones/normal variants). Teachable Machine is great for building simple interactive prototypes that help radiologists understand the potential use cases of ML in their work. The process I chose for training the ML model may not be suitable for building clinical-grade applications (that would require a team of computer scientists and physicians and much more data), but it has served my goal well: helping other radiologists see the assistive role ML can play in their daily work.

  • Teachable Machine: https://teachablemachine.withgoogle.com/


Another benefit of using Teachable Machine is that it let me use only one programming language (JavaScript) in the project instead of two (Python and JavaScript). Better still, I can perform inference in the web browser with TensorFlow.js, running on a laptop or mobile device. This means patient data always stays private and is never uploaded to a server for classification, and with no round trip to a server, inference is faster.
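The in-browser inference flow above can be sketched with the Teachable Machine image library running on TensorFlow.js. This is a hedged illustration, not RadLens’s actual code: the model URL is a placeholder, and `classifyFrame`/`topPrediction` are hypothetical helper names.

```javascript
// Hypothetical sketch of in-browser inference with a Teachable Machine
// image model running on TensorFlow.js. In a real page the libraries are
// loaded via script tags:
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@teachablemachine/image"></script>
const MODEL_URL = "https://example.com/radlens-model/"; // placeholder URL

// Pure helper: pick the class with the highest probability.
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}

// Classify a single video frame entirely on the device; the image is
// never uploaded to a server.
async function classifyFrame(videoElement) {
  // tmImage is the global exposed by @teachablemachine/image.
  const model = await tmImage.load(
    MODEL_URL + "model.json",
    MODEL_URL + "metadata.json"
  );
  // predict() returns an array of { className, probability } objects.
  const predictions = await model.predict(videoElement);
  return topPrediction(predictions);
}
```

In a page, `classifyFrame` would be called in a loop on the `<video>` element showing the camera feed, and the top prediction rendered alongside a search link.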


RadLens’s second prototype: If a foot X-ray image is detected, the application will ask you to zoom in on the image. After the image is enlarged, the application will attempt to infer the specific classification of the accessory bone. You can then start searching for images of the predicted classification. The base code for the RadLens prototype has been published on GitHub.

  • GitHub: https://github.com/RadEdje/RadLens2.0

Looking Ahead

Currently, most ML applications for healthcare are pre-packaged solutions that, while powerful, have many limitations. Because the models must remain within central IT systems, these solutions are large and limited in where they can be deployed. They can also be very expensive, affordable only to large hospitals and clinics. And because they are pre-trained and packaged, it is difficult for local radiologists to retrain them for their own practical needs. Fundamentally, my vision is to enable local radiologists to adopt cost-effective ML technologies, and building proof-of-concept systems turned out to be much easier than I initially imagined.

In the future, I hope to improve this web application by adding object detection to clearly outline the observed fracture or accessory bone. Currently, the application only performs image classification, meaning it can detect which type is present but not where it is located.

I have learned that ML is a technology that can scale both horizontally and vertically. Horizontal applications already have very broad coverage, and such results typically come from the collaborative efforts of large teams of AI experts with deep experience in the medical applications of computer vision. I hope to inspire interest in vertically scaled AI development, because such applications are more targeted and can meet the specific use cases of radiologists worldwide. In my view, the web and TensorFlow.js are the best way to help people easily try developing machine learning applications in their own fields.
