
2D to 3D AI-driven real-time form biomechanics app

Awesome work & cool to see GG in there. Since you're out of the gates on this, I had a couple of thoughts on taking it from outstanding to truly exceptional; at that point you've got yourself a side hobby or a career in the form industry.

Related: I've been sharing some lively discussions with SocraDeez and wanted to orient you to this article in the Hardball Times. It alerts us to some of the potential advantages of modeling, what to model, and some of the ways it can challenge common misconceptions. I haven't yet caught up on how the pitching modeling front looks now compared to these arguments from 2022, but this was a pivotal piece. I do think there are a few things to debate in the pendulum conception of the swing.

Thank you very much! I sincerely appreciate the feedback. The article you linked to is really interesting. As I have time, I'm going to dig in deeper and try to catch up on pitching modeling. I haven't done this sort of thing in a long time, but trying to model a disc golf throw would be fun and interesting. (And a nice break from the day job!)

In these video-extracted models, and in general, ideally we would include the nose of the disc ripping around as the end of the compound pendulum.

Alternatively or in addition, if some inference about release speed/velocity could be extracted from the images, you've really got something top notch there.

Being able to track the nose of the disc would be great, but, unfortunately, I think that would be difficult from a technical perspective. I'll have to think about ways to do it. Inferring the release speed of the disc itself should be much more feasible. I'll think on that one too.

Even if not, these timecourses alone are already fascinating and could teach us a lot. If you're getting extractions and can post any, I'd be interested in breaking it down at some point.

I quickly dumped the data I have into a JSON file: Form-Data_SL-GG-PP.json

This data includes the position of each body part in each frame of the throw, for one throw each by Simon, GG, and Paige. If anyone has any questions or needs the data in a different format, please let me know and I would be happy to help.
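If it helps anyone get started, here's roughly how I'd poke at the file in Python. The key names below ("Simon", "frames", "right_wrist") are just placeholders; check the file for the actual structure.

```python
import json

import numpy as np

# Load the dump. The key names here ("Simon", "frames", "right_wrist") are
# illustrative placeholders -- check the file for the actual structure.
with open("Form-Data_SL-GG-PP.json") as f:
    data = json.load(f)

# Pull one joint's per-frame (x, y) positions for one thrower.
wrist = np.array([frame["right_wrist"] for frame in data["Simon"]["frames"]])

print(wrist.shape)  # (n_frames, 2) in the 2D representation
```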

Here's the clip of Paige, which I hadn't shared yet:

giphy-downsized-large.gif


*Also some potential 2D disadvantage there unless I'm mistaken - peak hand speed should come right after the elbow's, but I think because we've only got one plane its peak appears to start and die before the elbow's

This is a great point, thanks very much for raising it. I had been so focused on the programming that I didn't notice that the speed was wrong. This past week, I worked on using VideoPose3D to infer 3D positions. The result is a full 3D skeleton of the throw and much better speed estimates.
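(For anyone curious what the speed estimates actually are: once there are per-frame 3D joint positions, a speed timecourse is just the frame-to-frame displacement scaled by the frame rate. A minimal sketch of that calculation, not my exact code:)

```python
import numpy as np

def joint_speed(positions, fps):
    """positions: (n_frames, 3) array of one joint's 3D coordinates.
    Returns the joint's speed between consecutive frames,
    in position units per second."""
    step = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # distance moved each frame
    return step * fps                                          # convert to per-second

# e.g. wrist_speed = joint_speed(wrist_xyz, fps=240) for a 240 fps clip
```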

Here is an example with the GG clip:
giphy.gif


Much better, yeah? And a lot more to dig into here. This data is also included in the JSON file I linked above.
 

I can't tell you how excited I am about this. On the disc nose, I'd assume it's tricky, but if a player has a reasonably good grip the model still reveals a lot about the motion pattern and potential sources of lag and power.

You probably are on the cusp of something you could use for serious business should you so choose, especially if there's a version that allows a player to feed in their own video.

The 3D inference is of course approximate but definitely seems to be on track. I'd be curious how any differences in the estimated vs. actual throw affect the lag between the elbow and the hand in particular, and some of the dimensionality looks a little off, but there's at least a solid guess of the fundamental motion there.

Anecdotally, just being able to see the 3D skeleton moving is, to me, profoundly powerful. It really helps you see how much of the primary force of the move is a lateral shift toward the target, while it also involves that big rotational torque in a spiral.

Going to look closer when I get a chance. Thank you so much for doing this!
 

I like the idea of players being able to use the software on their own videos. I can get new videos rendered pretty quickly and easily. Here's a quick one I did of my first form check, which took less than 30 seconds:

giphy.gif


If people tag me in their form threads sometimes, I'd be happy to try to generate something for them. The biggest limitation is that the method requires a reasonably high frame rate or slow motion video. (Otherwise the arms can get too blurry to track.) Ideally, I'd like something more production ready that people can easily use themselves. Some good news on that front -- I just did a quick test and was able to run the software on a CPU instead of a GPU. It took quite a bit longer (over 2 minutes) but that bodes well for being able to use the software through a mobile app or website.
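(Nothing clever on my end for the CPU fallback, by the way; the pose models I'm using are PyTorch-based, so it's essentially the standard device-selection pattern, something like this:)

```python
import torch

# Use a CUDA GPU when available, otherwise fall back to the CPU
# (slower, but the pose models still run fine).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# model.to(device), and send each batch of frames to the same device
# before running inference.
```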

As for the 3D inference, I'm also really curious about where there are material differences between the estimated and actual throw. It would be great if direct motion capture data, like Chris Taylor collected with Ezra and Tristan, were available. That would give a great benchmark to compare against. And, if there were motion capture data of many different throws, I could pretty easily train a disc golf-specific model that would perform much better than the current one.

At least for now, if I have time, I may try exporting my 3D GG data into 3D modeling/animation software. That would at least allow for a better look at the estimate.

Anyway, there are a lot of interesting directions to take this project. Wish I had more time to follow them all!
 

2 mins is still pretty good because I assume anyone doing it is pretty serious about form (or just wants a pretty visualization). If you end up with a shareable architecture I'd be interested in trying to boot up your modified 3D+timecourse inference stuff and add some up here.

Don't want to ask you for a lot of work, but if you feel like it:

Is the frame rate on this one good enough or no?

For general purposes, since it's used so often for coaching here and there are not many good resources for high level standstills, seabas22 slo mo backhand at ~0:34 might be good.

I agree about the 2D->3D validation, of course that would probably require some significant investment of effort/time/money.

I always encourage more analysis of GG since he's so vertical.
 

Sorry for the delay! I unfortunately haven't had much time this week to work on this.

The model almost worked with your video, but lost track of your arm a bit in the swing.

giphy-downsized-large.gif


The issue happened right around here:

attachment.php


The model worked a lot better with SW's 240 fps video. I went ahead and ran the full 3D inference:



The 3D inference also has some issues with the swing. But I think I could help that with some interpolation and smoothing (currently only doing that in the 2D representation).
 


Super cool man, thanks for entertaining this. Those skeletons are very interesting and reveal some posture dynamics that aren't necessarily easy for people to see. That little subtle pendulum swing of the body and precession of the spine in particular.

In the SW22 wireframe it looks like it struggled a bit with his arm as it was entering follow through + going behind his body. Up until that point the 3D skeleton looks like it's picking out the majority of the leg, hip, and spine action that's a little harder to see without the skeleton inference, so I think that's already a boon. I'm curious what kind of interpolation can solve that 2D-3D issue in any case.

For proof of concept, it definitely looks like a high frame rate is needed for when the disc is exiting the pocket. I think this video yielded 240 fps (sorry, epilepsy warning, Halloween fluorescent horror-show flicker is present). I was wondering if higher resolution would help your model with the 3D arm inference on top of that, but if the flickers render it unusable, I can retake in natural light.
 

EPILEPSY WARNING! hahaha
for sure.

I would honestly think the standard 120 fps 4K from a GoPro would probably work the best.
Or the higher frame rates we are seeing from the newer cell phones with the clarity.

These sorts of things are interesting in how they work. They like frames to work with, but they also like good focus clarity and good definable edges as well.
So if you wear colors or are lit in a way that blends you together too much, they can get tracking loss.

But realistically, we could use markers to help identify.

I forgot which school it was, but they were building some stuff like this. Unfortunately, the people they were building data from that I saw in the IG post were... not anywhere close to good mechanics.
 
The 3D inference also has some issues with the swing. But I think I could help that with some interpolation and smoothing (currently only doing that in the 2D representation).

Ok, tested on my better phone camera at 120 fps here. The flicker is mostly gone if you want to see how this frame rate and resolution work out:

 

Oops, wish I had checked the forums before hitting render button.

To not bury the lead, here are the results on the 240fps video (I was able to mostly fix the flickering):



I think this is easily the best result yet. The high frame rate makes the motion easy to track. I also implemented the interpolation and smoothing on the 3D inference that I had mentioned earlier.

Basically, the interpolation works like this: AlphaPose produces confidence values for the position of each joint in each frame. I removed all position data where the confidence value was lower than a set threshold and then did a simple linear interpolation between the remaining points. (There's definitely a smarter approach here than linear interpolation, but that's a problem for future me to deal with.) I previously had this working in the 2D version only, but now the 3D version has it too. Also, I added the ability to manually remove and then interpolate over specific problematic frames.
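In code, the idea is pretty compact. Here's a stripped-down version of the approach (not my exact implementation), with numpy handling the linear interpolation:

```python
import numpy as np

def clean_joint_track(positions, confidences, threshold=0.6, drop_frames=()):
    """positions: (n_frames, dims) coordinates of one joint from AlphaPose.
    confidences: (n_frames,) per-frame confidence for that joint.
    Frames below the confidence threshold (or manually listed in drop_frames)
    are discarded and linearly interpolated from the surviving frames."""
    positions = np.array(positions, dtype=float)
    keep = np.asarray(confidences) >= threshold   # threshold is just a tunable cutoff
    keep[list(drop_frames)] = False               # manual removal of known-bad frames
    frames = np.arange(len(positions))
    for d in range(positions.shape[1]):           # interpolate each coordinate separately
        positions[~keep, d] = np.interp(frames[~keep], frames[keep], positions[keep, d])
    return positions
```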

These sorts of things are interesting in how they work. They like frames to work with, but they also like good focus clarity and good definable edges as well.
So if you wear colors or are lit in a way that blends you together too much, they can get tracking loss.

In addition to the frame rate, I think this is key for the model to work well. Having a reasonable amount of contrast between the thrower and background is essential.

Assuming good contrast, my guess for the minimum requirements for a decent result is 480p/60fps. But probably want something like 720p/120fps to be safe.

Anyway, how are we feeling about this? I'm happy to provide the raw video files, data, tinker with how things are displayed, or do whatever else people find helpful. The code is unfortunately a complete mess. But, whenever I get a good chunk of time, I can try to clean it up such that others could use it.
 
Oops, wish I had checked the forums before hitting render button.

To not bury the lead, here are the results on the 240fps video (I was able to mostly fix the flickering):

...

I think this is easily the best result yet. The high frame rate makes the motion easy to track. I also implemented the interpolation and smoothing on the 3D inference that I had mentioned earlier.

Nice! I agree that this looks better. It's doing an impressive job inferring how each arm is moving as it goes behind the body. The only areas where I see it struggling are (1) tracking the throwing wrist as it's roughly directly behind me/away from the camera - the depth gets confused for a moment - and similarly (2) as the front leg's shin angle goes vertical, it appears to have some trouble tracking the leading knee and hip joint in the depth dimension away from the camera. I think the extent to which it's worth time fixing things like that depends on the goal.

Potential goals.

1) Academic. Getting a skeletonized motion model in 3D can provide an abstraction of the core throwing motions and a basis for comparing motion patterns & their outcomes (disc release speed, "smash factor" or whatever one is interested in). This approach is going to miss important details of the biology and the Center of Gravity, but is still useful in the big picture. Maybe a CoG could be inferred to some degree of precision, but that's tricky because we all carry mass differently on our skeletons or abstract wireframes.

Empirically, these could be powerful tools to potentially show players (1) what people have in common in top form (I think most of it is written in DGCR, but these wireframes make some of it much easier to see) and (2) how changes or potential changes could impact speed & consistency.

2) Coaching. To me, these 2D and 3D wireframes illustrate all kinds of things I could sorta see, but were really hard to pick out until recently. I think beginners lack the conceptual eye and these wireframes can help. They make a few things really obvious, including the way the whole body rocks and swings and shifts (look at the spine and pelvis in the wireframes), the fundamental lateral shift and how the pelvis moves in the context of that shift, the way the plant and movement through braced legs causes the tilted spiral and rotation into followthrough, and so on. With players absolutely drowning in information (much of it bad or redundant), these could help a coach "cut to the chase" about what the primary motion pattern looks like and get people focused on the big picture when they're swimming in details. The fundamental motion is remarkably similar across people. Pointing out how a change in a player's wireframe would make it more like a top pro's overall wireframe motion could be very powerful.

3) Customization. Should you use a Gibson off arm or a GG off arm? Should you hop vertically or horizontally? Can you get a little more late acceleration with a Simon or a GG move into the pocket? I don't think wireframes like this will answer all of those questions (in the end, "just do it and see what happens" prevails), but the idea that you can start to see and think about what might work for your body is cool. And most good models allow you to specify alternate starting positions and dynamics to run theoretical experiments - maybe Player A finds it really hard to do the GG off arm due to their learning history, but the model suggests they should bear with it if crushing is their goal. A model helps you think about changes in the context of that player's overall swing. Monday morning motivation/speculation.


Basically, the interpolation works like this: AlphaPose produces confidence values for the position of each joint in each frame. I removed all position data where the confidence value was lower than a set threshold and then did a simple linear interpolation between the remaining points. (There's definitely a smarter approach here than linear interpolation, but that's a problem for future me to deal with.) I previously had this working in the 2D version only, but now the 3D version has it too. Also, I added the ability to manually remove and then interpolate over specific problematic frames.

This really does seem to work well overall without a fancier interpolation. I guess if a different one rescues a couple of the minor issues that remain, it might be worth it, if it's not too expensive, depending on the goal for this little project.


In addition to the frame rate, I think this is key for the model to work well. Having a reasonable amount of contrast between the thrower and background is essential.

Assuming good contrast, my guess for the minimum requirements for a decent result is 480p/60fps. But probably want something like 720p/120fps to be safe.

Makes sense, just by eye the 120fps with good background contrast still certainly minimized the blurring, though I'd be curious when it struggles for a big arm. Even in my 240 fps video there's a point where it starts to blur a bit moving through the hit/release. But pragmatically as long as it's good enough to not botch the wrist tracking it's probably ok. For my money I'd say the fidelity of the wrist tracking and inferring release speed (if possible, though a radar gun is probably best in any case) are important. Tracking the nose coming around is a compromise between the two for academic purposes. For coaching purposes, just knowing how fast the dang thing came out may prevail to most people.


Anyway, how are we feeling about this? I'm happy to provide the raw video files, data, tinker with how things are displayed, or do whatever else people find helpful. The code is unfortunately a complete mess. But, whenever I get a good chunk of time, I can try to clean it up such that others could use it.

I mean I'm just giddy, it's super cool and illustrates many things we've read about here in real time. I think most players want more speed/consistency when they get serious about form work, so the ability to yoke the wireframes to that is important. I'm assuming that the ability to identify changes that cause better lagged acceleration that you can see in the timecourses is the main important idea. That's because if the fundamental motion pattern is good and creating a nice, lagged acceleration spike with a sudden late peak moving through the wrist, then the battle is about making that spike process better, and learning to add more momentum and shift coming into the swing. I'm kind of bummed that it seems hard to get the late rotational aspect of GG or Simon's forearm supinating into the hit which is important to the ejection speed, but it's hard to get it all!

I'm also not sure we've solved the problem of the "best" swing for each player, and that's where a major innovation could be.

For individual coaching, making it a useable package is already obviously appealing IMHO. Just having a few pro examples to compare to is already outstanding :)
 
Potential goals.
...

This is a good outline of potential goals. It seems to me that the primary focus should be Coaching for now, because more people using this software means more data to analyze and pressure-test the model, which would also help advance the other two goals. It would probably be the most immediately useful for people too. As someone who has not quite developed the conceptual eye you mentioned, seeing the wireframes in motion is definitely super helpful.

Practically speaking, another reason to focus on the Coaching angle for now is that it would require less development work in the near term. Finding ways to model things like disc speed/smash factor should be possible, but it would involve developing different models than the current one. As for estimating CoG, it looks like having a decent 3D pose estimation model puts us 1/3 of the way there! (Kaichi et al. did something very similar in Estimation of Center of Mass for Sports Scene Using Weighted Visual Hull.) Maybe that will be my project for 2023. ;)
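(If anyone wants to play with the idea before then: a crude first pass at CoG from the 3D skeleton is just a mass-weighted average over body segments, using textbook-style segment mass fractions. The numbers and the segment-midpoint shortcut below are rough ballpark values, not the weighted visual hull method from that paper.)

```python
import numpy as np

# Rough whole-body segment mass fractions (Dempster-style ballpark values).
SEGMENT_MASS_FRACTION = {
    "head": 0.081, "trunk": 0.497,
    "left_upper_arm": 0.028, "right_upper_arm": 0.028,
    "left_forearm": 0.016, "right_forearm": 0.016,
    "left_hand": 0.006, "right_hand": 0.006,
    "left_thigh": 0.100, "right_thigh": 0.100,
    "left_shank": 0.0465, "right_shank": 0.0465,
    "left_foot": 0.0145, "right_foot": 0.0145,
}

def center_of_gravity(segment_points):
    """segment_points: dict mapping segment name -> (x, y, z), where each point
    is a crude stand-in for that segment's center of mass (e.g. the midpoint
    of the two joints bounding the segment)."""
    total = sum(SEGMENT_MASS_FRACTION[name] for name in segment_points)
    weighted = sum(SEGMENT_MASS_FRACTION[name] * np.asarray(point)
                   for name, point in segment_points.items())
    return weighted / total
```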

I mean I'm just giddy, it's super cool and illustrates many things we've read about here in real time. I think most players want more speed/consistency when they get serious about form work, so the ability to yoke the wireframes to that is important. I'm assuming that the ability to identify changes that cause better lagged acceleration that you can see in the timecourses is the main important idea. That's because if the fundamental motion pattern is good and creating a nice, lagged acceleration spike with a sudden late peak moving through the wrist, then the battle is about making that spike process better, and learning to add more momentum and shift coming into the swing. I'm kind of bummed that it seems hard to get the late rotational aspect of GG or Simon's forearm supinating into the hit which is important to the ejection speed, but it's hard to get it all!

I'm also not sure we've solved the problem of the "best" swing for each player, and that's where a major innovation could be.

For individual coaching, making it a useable package is already obviously appealing IMHO. Just having a few pro examples to compare to is already outstanding :)

Thanks for your encouragement here! I'm glad someone at least is finding this useful. I'm going to shift my effort into getting all of this into a package that others can use and will post it to Github. (A website or app would probably be ideal, but maybe that's my project for 2023.)

That might take a while. So, first, I'm going to create a set of visualizations with different pros. I'll redo the old ones I made of GG, Simon, and Paige. But I'd be happy to hear other suggestions. (Maybe Ella Hansen? Bradley Williams?)

Also, I'm happy to take suggestions on what the visualizations should look like. I've mostly just been quickly throwing things together so far. Given the importance of the timecourses, I should probably make them more prominent. The software also has some visualization features that I haven't really shown yet. For example, it can show timecourses for any joint tracked by the model, not just the wrist, elbow, and shoulder. It can also draw the path of a joint's motion:

giphy.gif
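(The path drawing is nothing exotic under the hood; conceptually it's just the joint's x/y trace plotted on top of the video frame, along these lines:)

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_joint_path(ax, positions, color="orange"):
    """positions: (n_frames, 2) pixel coordinates of one joint.
    Overlays the joint's path on an axis that already shows the video frame."""
    positions = np.asarray(positions)
    ax.plot(positions[:, 0], positions[:, 1], color=color, linewidth=2)
    ax.scatter(positions[-1, 0], positions[-1, 1], color=color, s=30)  # current position

# e.g. fig, ax = plt.subplots(); ax.imshow(frame); draw_joint_path(ax, wrist_track)
```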
 
Practically speaking, another reason to focus on the Coaching angle for now is that it would require less development work in the near term. Finding ways to model things like disc speed/smash factor should be possible, but it would involve developing different models than the current one.

It would be quite interesting for sure if we could use the measured angle and movement data from the swing to calculate an exit velocity.
 
I think the joint tracking in 3D is awesome! Recall:

CVdKLZq.png



Yeah, the Coaching goal is probably the way to go. Most immediate utility & if it's easier on you it's win-win. I'm happy to be a package tester. "CoG in 2023" is a great slogan.

For pro Wireframes, I'll suggest a bunch depending on how much time you want to put in. There's some method to my madness. I wanted to get some variability in body types and horizontal/vertical components and a few quirks. I also wanted to make sure there were a couple examples of mechanically classic throwers who are not as visible these days (Brinster, classic Schusterick, Feldberg). I wanted a couple more standstills in there since they're so important to learning mechanics.


Standstills - these are still really hard to find in good camera angles

- Already got seabas above.

- Corey Ellis - I want this & there are many examples, but I can't find good coverage close to 90 degrees to the tee yet.

- Schusterick (buyer beware - I misunderstood how this works for a long time and got hurt trying it - see older notes from SW22 on this vs. his)

- Kuoksa


X-Steps

Classics

- Brinster - want it, but can't find a great shot near 90 degrees to the tee.

- Papa pendulum


Younger freewheeling pros

- Young Simon

- Young McBeth


Mature modern pro form

- Wysocki

- Gibson

- Williams

- Tamm

- McMahon

- McBeth - the one at ~2:00 is pretty good and from before everything was so tightened up
 
Okay, this is odd interest here, because Classic Shoestring was considered peak form at one point.


Schusterick (buyer beware - I misunderstood how this works for a long time and got hurt trying it - see older notes from SW22 on this vs. his)

I agree that it was. I misunderstood it, but I was not criticizing his form. I intended to mean that it can be hard to learn from a new-player perspective. I will clarify exactly what I misunderstood, long before I got deep into the rabbit hole, and why I linked back to what SW22 thought about it (which I discovered later).

Based on form critiques it seems like at least these two things often confuse people about Will's standstill:

1) how the backswing load works, including him coming back toward the rear leg before shifting forward and

2) how his weight shift works. At first it can look like it's highly rotational due to how his wiry legs and feet move, even though it's still a lateral stride and shift from behind.


So to be clear, I think the form is "good," but it can be hard to learn for those reasons. :)
 

I'm not honestly sure if anyone really throws like that now anyway?
I don't pay "that" much attention to pros anymore.

His methods at one time worked really well.

I should ask Harper if I can snag some video of him throwing full power, then. Shoestring taught him to throw back when he was more in his heyday, and Harper has been playing since he was like 10 or something.
It's so fun to watch him keeping up with elite players over the last few years; I think he just turned 17 this year. And there's nothing like pro tour players getting outdriven by a 15-16 year old.

So if this is something y'all would like, I'm sure Harper's form is going to be really reminiscent of Will's old form. And I can just ask him to meet up and take as high-quality video as y'all want.
 
Referring to whatever we want to call the disc leveraging out off the last point of contact, which should ideally result in the disc moving faster than the hand (though I'm curious if you disagree, or what else we'd call it so as not to confound it with ball elasticity at contact):

UMwAYNi.png
I'm new to this forum. I love it so far. Isn't smash factor in golf just the ball speed divided by the clubhead speed? I've played traditional golf my entire life. I took a break and played disc golf for 2 years about 10 years ago, and I just got back into it a few months ago; now it's all I want to do and all I think about. I wonder if there's a way to calculate the difference between the hand or lever speed and the speed of the disc. I feel like we would need a Trackman for disc golf and would need to put those tiny metal stickers all over the disc to measure its spin and speed, etc. I love this discussion and this entire forum! I'm desperately working on my form right now and thinking about posting a video in the form review section. I have a few flaws that I really need to work on!
 
So, I'm stuck without my desktop for a few weeks. That will unfortunately delay my ability to create the videos, as I'll need to try to get the software running on my laptop. Two silver linings here: 1) this is a good opportunity to see how usable the software is on a computer without a graphics card; and 2) it gives me some time to work on a related side project.

After thinking about it for a bit, I realized that tracking the disc shouldn't be that difficult. And, if we can track the disc's movement frame-to-frame, we can also (very) roughly estimate the disc speed because we know the size of the disc and the frame rate of the video.
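(For anyone who wants the gist: a typical disc is roughly 21 cm across, so its apparent size in the frame gives a pixels-to-meters scale, and the frame rate turns per-frame displacement into meters per second. A rough sketch, glossing over the actual tracking:)

```python
import numpy as np

DISC_DIAMETER_M = 0.21  # typical golf disc diameter, roughly 21 cm

def estimate_disc_speed(centers_px, disc_diameter_px, fps):
    """centers_px: (n_frames, 2) pixel positions of the tracked disc center.
    disc_diameter_px: apparent disc diameter in pixels (sets the scale).
    Returns per-frame speed estimates in meters per second."""
    meters_per_px = DISC_DIAMETER_M / disc_diameter_px
    step_px = np.linalg.norm(np.diff(centers_px, axis=0), axis=1)
    return step_px * meters_per_px * fps

# e.g. speeds = estimate_disc_speed(tracked_centers, disc_diameter_px=40, fps=240)
```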

The result:

giphy.gif




Isn't smash factor in golf just the ball speed divided by the clubhead speed? ... If there's a way to calculate the difference in the hand speed or lever speed and the speed of the disc.

Welcome to the forum! I'm glad that you're enjoying it so far.

As for calculating the "disc golf" smash factor, what if we did something like divide the disc speed (estimated using the above model) by the wrist speed (estimated using the pose estimation model)? I think that would work and be useful.
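Something like this, in other words (with both speeds in the same units, and the caveat that each is itself an estimate):

```python
def smash_factor(disc_speed, wrist_speed):
    """Disc golf analogue of golf's smash factor: how much faster the disc
    leaves than the wrist was moving at release. Speeds must share units."""
    return disc_speed / wrist_speed

# e.g. smash_factor(31.0, 18.5) -> about 1.68 (illustrative numbers only)
```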
 

Disc speed/wrist speed might be good. I think the tip of the index finger/last point of contact would be ideal but I suspect that's going to be a harder problem.

Brilliant solution to speed inference. And in fact, you could use some of the Gatekeeper Media coverage if there are any at 90 degrees to the tee to check the estimate against what their radar measurement says :)
 
