
Creating ball tracking data using basketball game footage

Introduction

My name is Zhao Jae-bin, and I participated in the 2024 DeNA AI Specialist Summer Internship. My usual research focuses on the fairness (bias) of AI.

In this internship, I worked on the task of creating basketball ball tracking data and also explored what kinds of analysis and team feedback could be produced with this data.

Data utilization in basketball

DeNA operates the Kawasaki Brave Thunders, a professional basketball team. The use of data to strengthen teams has become widespread across many sports, and DeNA is likewise working on utilizing data to strengthen the Kawasaki Brave Thunders. In overseas leagues such as the NBA, a wide range of analyses are carried out using what is called “tracking data.”

Tracking data is time-series data of player and ball positions, as illustrated below. In the NBA, dedicated equipment is permanently installed at game venues to obtain this tracking data, which is used to derive ball possession rates, passing information, and various other analyses.
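As a rough illustration (my own sketch, not the actual format used by the NBA or DeNA), tracking data can be pictured as per-frame rows of court coordinates for each player and the ball:

```python
# Illustrative only: a hypothetical tracking-data layout with per-frame
# court coordinates (in meters) for players and the ball.
import pandas as pd

tracking = pd.DataFrame(
    [
        {"frame": 0, "object": "player_23", "x_m": 11.2, "y_m": 6.8},
        {"frame": 0, "object": "ball",      "x_m": 11.5, "y_m": 6.9},
        {"frame": 1, "object": "player_23", "x_m": 11.4, "y_m": 6.9},
        {"frame": 1, "object": "ball",      "x_m": 12.3, "y_m": 7.1},
    ]
)
print(tracking)
```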

However, Japan’s professional basketball league, the B.League, does not currently have a system in place to acquire tracking data, so the data available for analysis is limited. Therefore, DeNA is attempting to create tracking data from game footage in order to enable more diverse analysis.

About internship assignments

During this internship, I worked on the task of creating ball tracking data (see the figure below).
Here I will introduce the difficulties of ball tracking and the related research.

[Figure: Overview]

Current issues

While player tracking data is already being created, this time I worked on creating ball tracking data.
Compared to player tracking, ball tracking involves the following difficulties:

  • The ball appears much smaller in the image than a player, so it is relatively difficult to detect and is frequently occluded.
  • While players are always in contact with the court, the height of the ball above the court must also be considered.

Related research

A study on Ball 3D Localization [1] shows that it is theoretically possible to create ball tracking data by comparing the size of the ball in the image (in pixels) with the actual ball size (approximately 24.5 cm in diameter). However, when we actually applied this method, slight differences in the detected ball size caused large fluctuations in the estimated position. In the image below, the estimated ball position shifts by more than 3 meters when the detected ball size changes by just 1 pixel, which makes it difficult to reach an accuracy usable in practice.
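To see why the size-based estimate is so sensitive, consider a pinhole camera model where the distance is estimated as Z = f · D_real / d_pixels. The following is a minimal sketch with illustrative (assumed) focal length and pixel sizes, not the setup used in the study:

```python
# Minimal sketch (assumed numbers): distance from apparent ball size under a
# pinhole camera model, showing how a one-pixel difference shifts the estimate.
BALL_DIAMETER_M = 0.245          # regulation basketball diameter (approx.)

def distance_from_pixel_size(focal_px: float, diameter_px: float) -> float:
    """Estimate camera-to-ball distance: Z = f * D_real / d_pixels."""
    return focal_px * BALL_DIAMETER_M / diameter_px

focal_px = 2400.0                # assumed focal length in pixels
for d in (12.0, 13.0):           # detected ball diameters differing by one pixel
    print(f"{d:.0f} px -> {distance_from_pixel_size(focal_px, d):.2f} m")
# 12 px -> 49.00 m, 13 px -> 45.23 m: a single pixel changes the estimate by
# several meters, consistent with the instability described above.
```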

[Figure: Related work]

Proposed approach

Focusing on the handler

The related work above is based on detecting the ball itself. However, I thought the issues described above could be resolved by focusing on the player holding the ball, the handler, rather than on the ball.
Focusing on the handler rather than the ball has the following advantages:

  • The detection target becomes a player, which is larger and easier to detect. (Handler detection still requires attending to the ball, so the essential task remains closely related to ball detection.)
  • Because the detection target is a player standing on the court, the height of the ball no longer needs to be considered.

Based on these advantages, we proposed a handler-based ball tracking approach.

Framework

The framework of the proposed approach is as follows. The input is a frame image from the game footage.

  1. (Top center) Perform handler detection on the input image. The model used is a fine-tuned object detection model.
  2. (Bottom center) Perform court line detection and compute a homography matrix that maps the court in the image onto the actual court.
  3. (Right) Since the bottom edge of the handler’s bounding box is in contact with the ground, the center of that bottom edge is treated as the ball position, and the homography-based coordinate transformation yields the ball position (tracking data) on the actual court (see the sketch below).

[Figure: Framework]
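As a rough sketch of step 3 (an assumed implementation, not the original code), the bottom-center of the handler’s bounding box can be projected onto the real court with a homography estimated from court-line correspondences, for example using OpenCV:

```python
# Minimal sketch (assumed): map the bottom-center of the handler's bounding
# box to court coordinates via a homography from four court-line points.
import cv2
import numpy as np

# Assumed correspondences: pixel positions of four known court points and
# their real-world coordinates in meters (a FIBA court is 28 m x 15 m).
image_pts = np.array([[120, 640], [1800, 655], [1500, 180], [400, 175]], dtype=np.float32)
court_pts = np.array([[0, 0], [28, 0], [28, 15], [0, 15]], dtype=np.float32)
H, _ = cv2.findHomography(image_pts, court_pts)

def bbox_to_court(bbox_xyxy, H):
    """Project the bottom-center of a handler bbox (x1, y1, x2, y2) onto the court."""
    x1, y1, x2, y2 = bbox_xyxy
    foot = np.array([[[(x1 + x2) / 2.0, y2]]], dtype=np.float32)  # bottom-center pixel
    court_xy = cv2.perspectiveTransform(foot, H)
    return court_xy[0, 0]  # (x_m, y_m) on the real court

print(bbox_to_court((900, 300, 980, 620), H))
```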

What to do when no handler is detected

This approach calculates the ball position from the detected handler, so the ball position cannot be obtained in frames where no handler is detected (e.g., while the ball is being passed). For such frames, we linearly interpolated the ball position between the nearest preceding and following frames in which a handler was detected.

We chose linear interpolation, a very simple method, because the ball travels in a straight line over the court unless a player applies force to it. Conversely, if the ball is not moving in a straight line, it is in contact with a player, so that player can be detected as the handler and the ball position obtained directly.
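The gap filling itself is straightforward. Below is a minimal sketch (assumed, not the original code) that linearly interpolates missing court-plane ball positions with pandas:

```python
# Minimal sketch (assumed): fill frames with no detected handler (NaN) by
# linear interpolation between the surrounding detected ball positions.
import numpy as np
import pandas as pd

# Hypothetical per-frame ball positions in meters; NaN marks undetected frames.
positions = [(4.0, 7.5), (np.nan, np.nan), (np.nan, np.nan), (7.0, 9.0), (7.2, 9.1)]
df = pd.DataFrame(positions, columns=["x_m", "y_m"])
df_filled = df.interpolate(method="linear", limit_area="inside")
print(df_filled)
#    x_m  y_m
# 0  4.0  7.5
# 1  5.0  8.0
# 2  6.0  8.5
# 3  7.0  9.0
# 4  7.2  9.1
```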

demo

Here is a demo of the ball tracking data created using the approach above.
(Visualized together with player tracking data.)

Analysis using ball tracking data

Finally, I will introduce, using video examples, what kinds of analysis could be achieved in the future with the ball tracking data we created.

Ball travel distance/expected score calculation

(Bottom left of video) The distance the ball travels quantifies how well the offense moves the ball, and it can be computed easily from ball tracking data.

(Bottom right of video) By combining each team’s shot-success-rate data across the court, known as a shot chart, with the ball tracking data, it is possible to calculate the expected score when shooting from the current ball position.
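Both metrics are simple to compute from the tracking data. The sketch below is my own illustration (the coordinates, grid size, and shot-chart values are assumed): it sums frame-to-frame displacements for travel distance and looks up an expected score from a shot-chart grid.

```python
# Minimal sketch (assumed values): ball travel distance and an expected-score
# lookup from a shot-chart grid over the court.
import numpy as np

def travel_distance(track_xy: np.ndarray) -> float:
    """Sum of frame-to-frame displacements of the ball on the court (meters)."""
    return float(np.linalg.norm(np.diff(track_xy, axis=0), axis=1).sum())

def expected_score(shot_chart: np.ndarray, xy, cell_m: float = 1.0) -> float:
    """Look up expected points at a court position from a shot-chart grid,
    where shot_chart[i, j] holds (success rate * shot value) for that cell."""
    i = min(int(xy[0] // cell_m), shot_chart.shape[0] - 1)
    j = min(int(xy[1] // cell_m), shot_chart.shape[1] - 1)
    return float(shot_chart[i, j])

track = np.array([[4.0, 7.5], [5.0, 8.0], [6.0, 8.5], [7.0, 9.0]])
print(travel_distance(track))                 # ~3.35 m over these frames
chart = np.random.default_rng(0).uniform(0.6, 1.4, size=(28, 15))  # dummy 1 m grid
print(expected_score(chart, track[-1]))       # expected points at the last position
```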

Example of analysis using expected score value

Using the expected score described above, passes can also be analyzed quantitatively. For example, in the image below, of the four players (red bounding boxes) to whom the player with the ball (blue bounding box) can pass, the player near the paint area has the highest expected score. By identifying the optimal passing target in this way, it becomes possible to give the team feedback on “good passes” that lead to more points (see the sketch after the note below).

*Due to time constraints, the expected score is calculated from ball tracking data only. In reality, defender positions and player information are also important factors, and we believe that integrating player tracking data would lead to more reliable analysis.
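As a small illustration of this pass analysis (the shot-chart values and player coordinates are assumed; a real shot chart and real positions would be used in practice), the best pass target can be chosen as the candidate receiver whose court position has the highest expected score:

```python
# Minimal sketch (assumed values): pick the candidate receiver whose court
# position has the highest expected score on an illustrative 1 m shot-chart grid.
import numpy as np

chart = np.full((28, 15), 0.9)          # dummy shot chart: expected points per cell
chart[22:28, 5:10] = 1.3                # higher value near the paint area (assumed)

def expected_score(xy):
    i, j = min(int(xy[0]), 27), min(int(xy[1]), 14)
    return chart[i, j]

candidates = {"A": (24.0, 7.0), "B": (18.0, 2.0), "C": (20.0, 13.0), "D": (14.0, 7.5)}
best = max(candidates, key=lambda p: expected_score(candidates[p]))
print("best pass target:", best)        # -> "A", the player near the paint area
```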

[Figure: Pass analysis]

Summary/future outlook

In this internship, I took on the challenge of creating basketball ball tracking data and, by focusing on the handler rather than the ball itself, resolved the conventional difficulties with a simple approach. We also verified what kinds of analysis ball tracking data makes possible, and I think we were able to demonstrate the future potential of combining basketball with data utilization.

There are two major prospects for the future.

  • This time we applied an object detection model to each frame independently, but we believe more accurate handler detection would be possible with a model such as SAM 2 [2], which computes attention across frames.
  • We believe that combining ball tracking with player tracking would enable even more fine-grained analysis.

Conclusion

I remember being very nervous at first because the internship assignment was in a completely different field from my usual research. However, thanks to the close support of my mentors, Mr. Yoshikawa and Mr. Yanagbe, I was able to catch up on the field, overcome obstacles in my approach, and discuss the experimental results, and I was able to produce solid outcomes. I would like to thank both of my mentors for their help.
Additionally, this year there were many opportunities to attend events and work in the office, so I was able to interact with many employees. Everyone was friendly and easy to talk to, and I could once again feel DeNA’s open atmosphere.

I was able to gain a variety of experiences through this internship, and it was an exciting four weeks that flew by. Thank you again to all the mentors and employees who were involved.

References

[1] Gabriel Van Zandycke, et al. “Ball 3D Localization From A Single Calibrated Image”, arXiv preprint arXiv:2204.00003, 2022.

[2] Nikhila Ravi, et al. “SAM 2: Segment Anything in Images and Videos”, arXiv preprint arXiv:2408.00714, 2024.
