TensorFlow, AI, Autopilot, VendeeGlobe
#11
Using cameras to see objects that they can alarm humans about is different from using cameras to enhance the autopilot.
Reply
#12
(2023-10-25, 07:33 PM)jim321 Wrote: scrape the web for pics of buoys, boats, ships.
how about ais data too. get course speed "et cetera".
Charts ?

Hi Jim, Thanks for your enthusiasm and the excellent suggestions.

My thoughts... I sail the Solent and I don't think it will take too long to get a good database of pictures of navigational marks. I get that you could "scrape the web", but as a designer by profession I am aware that's exactly what Midjourney did, and I feel it is wrong to profit, even in a non-monetary sense, from other people's data. It could also lead to copyright issues down the line too.

So I would much rather create the database from my own images and from those of sailors interested in the project who are happy to contribute to the collection effort. Once such a database is ready it could be used for assisting a self-navigating ship to stick to buoyed channels, go the correct side of cardinal marks, etc.
I am thinking about how you would display the information, and I guess one way would be to insert data into the NMEA AIS stream. This would also work well for targets that don't have AIS, but it might mean a double target for those that do, unless you could filter the duplicates out.
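Something along these lines could push detections onto the network for a plotter to pick up. This is only a rough sketch: the $PCAMT sentence name and field layout are made up for illustration, and a genuine AIS target would need proper !AIVDM 6-bit payload encoding rather than a proprietary sentence. Port 10110 is just a common UDP port for NMEA 0183 listeners such as OpenCPN.

Code:
# Sketch only: send camera detections as a proprietary NMEA 0183-style sentence.
# PCAMT is a made-up sentence name; real AIS targets need !AIVDM 6-bit encoding.
import socket

def nmea_checksum(body: str) -> str:
    # XOR of every character between '$' and '*', formatted as two hex digits
    csum = 0
    for ch in body:
        csum ^= ord(ch)
    return f"{csum:02X}"

def detection_sentence(label: str, lat: float, lon: float, range_m: float) -> str:
    body = f"PCAMT,{label},{lat:.6f},{lon:.6f},{range_m:.0f}"
    return f"${body}*{nmea_checksum(body)}\r\n"

def send_to_plotter(sentence: str, host: str = "127.0.0.1", port: int = 10110) -> None:
    # UDP to whatever is listening (e.g. OpenCPN or a Signal K connection)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(sentence.encode("ascii"), (host, port))

if __name__ == "__main__":
    send_to_plotter(detection_sentence("north_cardinal", 50.7631, -1.2973, 420))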

(2023-10-25, 07:51 PM)barrymac Wrote: Nice! 

Here's a product that does the object recognition https://sea.ai/
Starting at the bargain price of 11k euro, plus about another grand for connectivity, up to a glorious 45k for all the bells and whistles. 

They are using NVIDIA® Jetson based platforms 

Perhaps it's possible to crowd-source the training data via a Signal K plugin? We could really annoy people with a training-data captcha asking them to pick the squares with the floating container at every login :-) 

I have a GPU server that could be used for training runs, but I'm quite new to this stuff so I don't really know where to start. 

The Coral has object recognition examples available, though, and the demo videos do look good enough that it would be useful.

Maybe the distance estimate isn't that hard, a camera autofocus type system could potentially give a fairly good estimate.

Hi Barry, Cracking idea! I, like you, have a GPU server - might need that down the line. I believe you can add one of these Edge accelerators to a Jetson too. I think Raymarine and Sea.ai might be unamused if we managed to get something that works for a couple of hundred dollars. Wink
Reply
#13
(2023-10-26, 12:38 AM)seandepagnier Wrote: Using cameras to see objects that they can alarm humans about is different from using cameras to enhance the autopilot.

Agreed, this thread has veered somewhat from the original intelligent autopilot subject and maybe a new thread regarding object detection is justified.
Reply
#14
(2023-10-26, 03:20 PM)barrymac Wrote:
(2023-10-26, 12:38 AM)seandepagnier Wrote: Using cameras to see objects that they can alarm humans about is different from using cameras to enhance the autopilot.

Agreed, this thread has veered somewhat from the original intelligent autopilot subject and maybe a new thread regarding object detection is justified.

In my previous post I suggested that you could use object detection to improve navigation - in other words, instruct the autopilot to steer the vessel to the correct side of a cardinal buoy, for example. If, however, the system also detected a hazard up ahead, surely it would be negligent not to raise an alarm?

I don't see that as incompatible with the aims of the original post; I'd suggest it's an enhancement. However, if you prefer, I will set up an alternative thread to discuss navigation using object detection.
Reply
#15
(2023-10-26, 05:50 PM)Hillzzz Wrote:
(2023-10-26, 03:20 PM)barrymac Wrote:
(2023-10-26, 12:38 AM)seandepagnier Wrote: Using cameras to see objects that they can alarm humans about is different from using cameras to enhance the autopilot.

Agreed, this thread has veered somewhat from the original intelligent autopilot subject and maybe a new thread regarding object detection is justified.

In my previous post I suggested that you could use object detection to improve navigation - in other words, instruct the autopilot to steer the vessel to the correct side of a cardinal buoy, for example. If, however, the system also detected a hazard up ahead, surely it would be negligent not to raise an alarm?

I don't see that as incompatible with the aims of the original post; I'd suggest it's an enhancement. However, if you prefer, I will set up an alternative thread to discuss navigation using object detection.

Well, thread management issues aside, I really like the way fancy radars put targets on the chart plotter that look like AIS targets and have the same kind of functionality. It needs high-quality gyro input to create a stable overlay on the chart. I think an object detection system could work well like this. 

So, with help from GPT-4, the following NMEA 2000 PGNs might be relevant:

"PGN 129808: Radar Data - This PGN is specifically used to communicate radar data. It includes information about radar targets, their position, speed, trajectory, and other related data.

PGN 129039: AIS Class A Position Report - Even though this PGN is typically used for AIS data, a similar format might be used to display radar-detected targets on a chart plotter if they're shown similarly to AIS targets.

PGN 129040: AIS Class B Position Report - Again, this is primarily for AIS, but if the system displays radar targets in a manner similar to AIS Class B vessels, this PGN might come into play.

PGN 129291: Set & Drift, Rapid Update - This could be used for information about the current set and drift, which could affect the trajectory calculations of the radar-detected objects."
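Mapping a detection onto the chart needs your own position, a stable heading (hence the gyro point above), the camera's bearing to the object and some range estimate. A rough sketch of that projection, with illustrative numbers only:

Code:
# Sketch: turn a camera detection (relative bearing + estimated range) into a
# lat/lon for a chart overlay. Assumes own GPS position and a stable heading.
import math

EARTH_R = 6371000.0  # metres

def project_target(lat, lon, heading_deg, rel_bearing_deg, range_m):
    # great-circle destination point from own position along the true bearing
    brg = math.radians((heading_deg + rel_bearing_deg) % 360.0)
    d = range_m / EARTH_R
    lat1, lon1 = math.radians(lat), math.radians(lon)
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# e.g. a buoy seen 20 degrees off the starboard bow at roughly 300 m
print(project_target(50.7631, -1.2973, heading_deg=225.0,
                     rel_bearing_deg=20.0, range_m=300.0))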


Regarding the synthetic training data issue, I believe the practice works something like this. 

1. Collect a good set of real data from cameras installed in the field.
2. Manually label the objects using human brains and eyeballs
3. Use that data set as a seed for generating a great many more examples, which of course would be automatically labelled now. 
4. Introduce some randomness and visual adversity to make things more challenging for the target model: increase the sea state, add fog, etc. 

I believe there may be services available already that can do this, if not, then surely before long.
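For step 4, even something as crude as the sketch below would generate extra variants from already-labelled frames (photometric changes only, so YOLO-style box labels stay valid). The fog/noise/brightness numbers are arbitrary and the image here is just a stand-in array; a real pipeline would more likely lean on an augmentation library such as albumentations.

Code:
# Rough sketch of step 4: add fog, noise and brightness jitter to labelled frames.
import numpy as np

def add_fog(img: np.ndarray, density: float) -> np.ndarray:
    # blend towards a flat grey "fog" layer; density in [0, 1]
    fog = np.full_like(img, 200, dtype=np.float32)
    return (1.0 - density) * img.astype(np.float32) + density * fog

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    out = add_fog(img, density=rng.uniform(0.0, 0.5))
    out *= rng.uniform(0.7, 1.3)                 # brightness jitter
    out += rng.normal(0.0, 8.0, size=out.shape)  # sensor noise
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)  # stand-in frame
variants = [augment(image, rng) for _ in range(10)]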
Reply
#16
1) Use learning networks to steer the boat as an alternative to basic filters.

2) Detect hazards to navigation and thus provide a watch-keeping capability. Surely a computer can do a better job of this than a human.

3) Literally steer around objects as well, but this is the most difficult to implement and also not all that useful compared to the above. In any case, #2 must be implemented before we have to worry about #3
Reply
#17
(2023-10-26, 08:39 PM)seandepagnier Wrote: I would like to suggest some detail on the first high level idea.

1)  Use learning networks to steer the boat as an alternative to basic filters.
  • More nuanced polar performance predictions based on currently observed conditions, learned from previously observed similar conditions
On this one, I'm thinking about sail selection, wave height and direction, and the obvious fact that increasing wind speed doesn't translate to increasing boat speed after a point. 
  • Wave Pattern Recognition: Use sensors to recognize wave patterns and adjust boat movement accordingly.
  • Learned optimal helm response with respect to observed conditions, including wave direction and height
  • Current and Tide Optimization: Adjust route and speed based on tidal and current data.
  • Adaptive Tacking and Jibing: Use ML to determine optimal tacking/jibing points based on current and predicted conditions.
  • Dynamic Waypoint Adjustment: Adjust waypoints in real-time based on changing weather conditions and sea state.
  • True Wind Angle Optimization: Adjust sailing angle dynamically for optimal speed and safety.

2) Detect hazards to navigation and thus provide a watch-keeping capability. Surely a computer can do a better job of this than a human.

3)  Literally steer around objects as well, but this is the most difficult to implement and also not all that useful compared to the above.  In any case,  #2 must be implemented before we have to worry about #3
Reply
#18
(2023-10-26, 11:54 PM)barrymac Wrote:
(2023-10-26, 08:39 PM)seandepagnier Wrote: I would like to suggest some detail on the first high level idea.

1)  Use learning networks to steer the boat as an alternative to basic filters.
  • More nuanced polar performance predictions based on currently observed conditions, learned from previously observed similar conditions
On this one, I'm thinking about sail selection, wave height and direction, and the obvious fact that increasing wind speed doesn't translate to increasing boat speed after a point. 
  • Wave Pattern Recognition: Use sensors to recognize wave patterns and adjust boat movement accordingly.
  • Learned optimal helm response with respect to observed conditions, including wave direction and height
  • Current and Tide Optimization: Adjust route and speed based on tidal and current data.
  • Adaptive Tacking and Jibing: Use ML to determine optimal tacking/jibing points based on current and predicted conditions.
  • Dynamic Waypoint Adjustment: Adjust waypoints in real-time based on changing weather conditions and sea state.
  • True Wind Angle Optimization: Adjust sailing angle dynamically for optimal speed and safety.

2) Detect hazards to navigation and thus provide a watch-keeping capability. Surely a computer can do a better job of this than a human.

3)  Literally steer around objects as well, but this is the most difficult to implement and also not all that useful compared to the above.  In any case,  #2 must be implemented before we have to worry about #3

Here are some of my ideas:

1. Add particle swarm optimization to auto-tune autopilot gains
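A minimal PSO sketch for that, assuming the gains are something like P, I and D. heading_error_cost() here is only a placeholder; in practice it would replay logged data or run a simulation with the candidate gains and return, say, RMS heading error.

Code:
# Minimal particle swarm optimisation sketch for tuning autopilot gains.
import numpy as np

def heading_error_cost(gains: np.ndarray) -> float:
    # placeholder cost; a real one would evaluate the gains on logs or a simulator
    p, i, d = gains
    return (p - 0.6) ** 2 + (i - 0.05) ** 2 + (d - 0.1) ** 2

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    dim = len(lo)
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

best_gains, best_cost = pso(heading_error_cost,
                            bounds=(np.array([0.0, 0.0, 0.0]),
                                    np.array([2.0, 1.0, 1.0])))
print(best_gains, best_cost)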

2. Add a fractional-order PID (FOPID) controller to the autopilot, with auto-tuning 

https://www.mdpi.com/2076-3417/12/6/3139

3. For sea state detection (wave height and frequency), something like this can be used: a Kalman filter with initial parameters from a trochoidal wave model, combined with Doppler-effect formulas for a moving boat

https://bareboat-necessities.github.io/m...-math.html

A simple approximation for wave height can be taken from the peak vertical acceleration and the wave frequency in the trochoidal model. 
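If I have that right, the back-of-the-envelope version (assuming near-sinusoidal heave, and using the encounter frequency the IMU actually sees) is H = 2A ≈ 2·a_max/ω² with ω = 2πf:

Code:
# Wave height from peak vertical acceleration and dominant heave frequency.
# Assumes near-sinusoidal heave z = A*sin(w*t), so peak acceleration = A*w^2.
import math

def wave_height_estimate(a_max: float, freq_hz: float) -> float:
    # a_max: peak vertical acceleration (m/s^2, gravity removed)
    # freq_hz: dominant encounter frequency (e.g. from an FFT of heave accel)
    omega = 2.0 * math.pi * freq_hz
    return 2.0 * a_max / (omega ** 2)

print(wave_height_estimate(1.2, 0.2))  # ~1.5 m for 1.2 m/s^2 at 0.2 Hz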

Moreover, the vessel's RAO (response amplitude operator) has to be taken into account. 
The RAO is a complex-valued function of wave frequency and amplitude. 
The RAO is like a dynamic fingerprint of a hull design and also of its load (e.g. cargo weight) distribution. 
There will be a different RAO for each degree of freedom of the hull – the linear motions surge, sway, heave, and the orientation motions roll, pitch, and yaw.
If phase is not under consideration, the absolute value (magnitude) of the complex RAO is used.

4. Wave direction: how do you properly detect it if it's not aligned with the wind?

https://www.mdpi.com/1424-8220/22/1/78/htm

5. Use the Julia programming language instead of Python. 

6. For object detection, use YOLOv8 

https://github.com/ultralytics/ultralytics

YOLO (You Only Look Once) uses convolutional neural networks (CNNs) to process images quickly.
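For anyone wanting to try it, basic usage with the ultralytics package looks roughly like the sketch below (going from their documentation). The stock COCO-trained weights only know generic classes such as "boat", so spotting buoys, cardinal marks or floating containers would still need the custom training data discussed above.

Code:
# Rough sketch of running a pre-trained YOLOv8 model with the ultralytics package.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")               # small pre-trained model
results = model("frame.jpg", conf=0.4)   # path, numpy array or video source

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    print(cls_name, float(box.conf), box.xyxy.tolist())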

Fair winds!
Download BBN Marine OS for raspberry pi 

https://bareboat-necessities.github.io/m...at-os.html

Video of actual installation:

https://www.youtube.com/watch?v=3zMjUs2X3qU


Reply
#19
An open source library for vessel RAOs

https://github.com/RubendeBruin/mafredo

Ocean waves spectral data

https://github.com/wavespectra/wavespectra


Reply
#20
I would like to add my 2 cents worth.  I'll give an Artificial Intelligence 101 summary so my opinions at the end will have some meat behind them and hopefully sound intuitive… versus just an opinion.  I'd say I'm only an intermediate sailor and would defer to almost anyone here as knowing more, and I hope to learn from you all.  My background is more technical; I like the idea of using AI and have worked with it for a while.  I feel I have a pretty good grasp of what it can and can't do... at least at the commodity level of hardware - Arduino, RasPi, PCs.  These MPU/CPU/GPUs can do AI the same as ChatGPT-4... just at a far smaller scale.  The concepts are the same.

Nuts and Bolts of AI

I’ll start out with a 10,000-foot view.  I’m not sure how non-technical this can be made, but I’ll try.  Whereas I would have trouble naming all the different lines on a boat... I just color code mine and tell someone to pull the green one.  Wink  Similarly, I'm guessing many of you bailed out of mathematics classes at the earliest opportunity.  

Inputs - If you read a little about AI, you'll see the term parameters.  Strictly, parameters are the learned numbers inside a model (ChatGPT-4 is said to have over a trillion of them); here I'll use the word loosely for the number of inputs a model uses to solve the problem at hand.  In our case, an input is simply our wind and boat speed, wind and boat direction, GPS data, wave data, etc.  We wouldn't need anywhere near that many for our problem.  For a specific boat (our boat) we might get away with a few tens.  In the more general case of trying to apply it to all of our boats, we would likely need hundreds. 

The Model – The model is the “brain” of AI.  This is where mathematics comes into play.  I don’t recall where I was first introduced to matrices, but I read that they are commonly taught around Algebra II in high school these days.  The model is nothing more than one or more matrices.  Basic linear algebra is used to take the inputs and run multiplication and/or addition steps on these matrices.  The values in the matrices are simply numbers, most often called weights and biases.  The best way to think of them is as hundreds of gains.

Outputs – This is what spits out of the model.  In the case of ChatGPT-4, this might be a sonnet or a picture or any number of other things.  For us, in the simplest case, it is where to position the rudder.  A little more complex might be sheeting of the sails; I think I read that Sean was considering adding control-line adjustment to PyPilot.
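To make the "it's just matrices" point concrete, here is a toy version in Python. The sensor list, layer sizes and numbers are all made up; the point is only that the whole "brain" is a couple of matrix multiplies.

Code:
# Toy forward pass: a tiny two-layer model mapping sensor inputs to a rudder command.
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_hidden = 6, 8   # e.g. heading error, heel, wind speed/angle, turn rates
W1, b1 = rng.normal(size=(n_hidden, n_inputs)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(1, n_hidden)), np.zeros(1)

def forward(x: np.ndarray) -> float:
    h = np.tanh(W1 @ x + b1)            # hidden layer
    return np.tanh(W2 @ h + b2).item()  # rudder command in [-1, 1]

sensors = np.array([0.1, -0.05, 6.2, 0.8, 0.0, 0.02])   # made-up sensor readings
print(forward(sensors))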

Two Phases of AI

Using the model is the easier of the two – plug in all the input parameters, run the linear-algebra steps, and out pops the answer.  For our case of, say, several tens of parameters and an output of only controlling the rudder, even the lowliest Arduino could handle this problem at several hundred hertz.  IOW, an Arduino Nano could pilot the boat just fine once the model is defined.

Learning is where the model is taught and eventually defined.  Teaching a model takes more memory and more time.  Whereas an Arduino can run our simplistic model, it would take a little more power to teach it; even an ESP8266 or ESP32 could probably do it, and certainly any RasPi.  The model, described as a matrix, has many values in it.  The number of values is well characterized by the number of inputs and the number of outputs.  Once characterized, its size remains fixed – for those inputs, outputs and the goals being taught.  Like an infant brain, a starting model is fundamentally blank: the values are usually set to some random number between 0 and 1.  The inputs (from our sensors) are fed in and the Using calculations take place to get the outputs.  Now… since the brain is ignorant, the output is basically garbage.  Learning involves taking the outputs and telling the model just how bad the results are.  Adjustments are made to the weights and/or biases and the Using calculations are run again.  Hopefully the model converges on a useful solution… eventually!

Ways of Learning

I’ll describe two ways.  I’m sure there are many more and there are many sub-classes of each of these.  Both methods attempt to achieve the same thing. 

Back Propagation – This is the quicker and more definitive way.  Its main limitation is that it relies on the teacher knowing the answer decisively.  It uses more linear algebra, and calculus, to make the adjustments to the model in a well-characterized way.  The theory can be distilled down into a program that is fairly small and straightforward.  As mentioned, it could fit on an ESP8266 or ESP32 for our couple-hundred-parameter use case.  It would be pretty easy to set up the learning program to accept all the sensor data that PyPilot now takes as inputs.  In the evaluation phase, its output would simply be compared to where the human helmsman is currently positioning the rudder.  If anyone is interested, the “Hello World” program for AI back propagation determines what a hand-written digit is, using a publicly available database of digits written by hundreds of people.  Here is a series of YouTube videos that explains both the Using phase and the Back Propagation phase, going far more technical than the paragraphs above: https://www.youtube.com/playlist?list=PL...x_ZCJB-3pi  In this Hello World example the AI gets the answer right around 96% of the time. 
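For a feel of what that looks like in code, here is a minimal numpy sketch of the idea: a tiny network learning to copy a helmsman's rudder command from sensor inputs. The "human" here is just a made-up function standing in for logged steering data, and all sizes and rates are arbitrary.

Code:
# Minimal back propagation sketch: learn to copy a (pretend) human helmsman.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, lr = 4, 16, 0.05

# training data: rows of [heading_error, roll, wind_speed, wave_proxy] -> rudder
X = rng.uniform(-1, 1, size=(2000, n_in))
y = np.tanh(1.5 * X[:, 0] + 0.3 * X[:, 1])   # stand-in for logged human steering

W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, 1));    b2 = np.zeros(1)

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)            # forward ("Using") pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y                      # how wrong we are versus the human
    # backward pass: gradients of mean squared error
    g_pred = (2.0 / len(y)) * err[:, None]
    gW2 = h.T @ g_pred;  gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = X.T @ g_h;     gb1 = g_h.sum(axis=0)
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= lr * grad              # gradient descent step

h = np.tanh(X @ W1 + b1)
pred = (h @ W2 + b2).ravel()
print("final mean squared error:", float(np.mean((pred - y) ** 2)))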

Genetic Algorithms are used to handle cases where the answer isn’t decisively known.  I’ve been more interested in exploring this case.  It is called Genetic because it attempts to model the learning procedure on Darwinian evolution principles.  In its simplest form, a generation of models is generated – for instance 80 models, with all the values in each model chosen randomly.  Each model is run through a series of tests to evaluate its fitness for survival.  The stronger ones mate by combining their values into child models.  This goes on for many generations and hopefully convergence is achieved.  Sometimes it works, sometimes not.  Here is (more or less) the founder’s book on the strategy; it can generally be picked up for under $10 used - https://www.abebooks.com/9780201157673/G...157675/plp

I’ll give the example I was working on.  It uses a two-wheeled robot that can go forward and backward and turn.  Its inputs are some distance sensors around its perimeter and the speeds of the two motors driving it.  The outputs are the new motor speeds to be used.  In this example there is no higher power, like a human, that knows the right answer for how fast and in which direction the motors should turn.  Instead a goal is set and the robot is evaluated against it: this robot is more fit if it goes faster and stays further away from obstructions.  It fails and dies if it runs into a wall.  It fails if it spins around and goes nowhere. 
Here is a YT of the robot in a simple irregular donut type arena when it is first starting to learn.  Think – A baby trying to learn to walk.
https://www.youtube.com/watch?v=cXlDSS_dojI

Here it is in later generations, where some things are starting to come together.  It at least gets out of the first little region.  You’ll note in these that it has learned it can turn and/or back up after hitting a wall.
https://www.youtube.com/watch?v=1FWgiEgE75s

Here is a fairly good candidate that can cycle through.
https://www.youtube.com/watch?v=5vjrMeTcAtw

And finally, transferring it to another arena where it wasn’t trained.  This shows that it can apply what it knows in unknown areas.
https://www.youtube.com/watch?v=L0flTXwEigg
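Stripped of the robot and the simulator, the core genetic loop is quite small. Here is a toy sketch: the fitness function is only a stand-in for "goes fast and stays off the walls", and the population size, crossover and mutation settings are arbitrary.

Code:
# Stripped-down genetic algorithm: random "brains", score, keep the fittest, breed.
import numpy as np

rng = np.random.default_rng(0)
pop_size, n_genes, generations = 80, 10, 200

def fitness(genes: np.ndarray) -> float:
    # toy stand-in: best when every gene is near 0.7
    return -float(np.sum((genes - 0.7) ** 2))

pop = rng.random((pop_size, n_genes))        # generation zero: random values
for gen in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][: pop_size // 4]]   # fittest quarter
    children = []
    while len(children) < pop_size:
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_genes)            # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0.0, 0.02, n_genes)   # mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("best fitness:", fitness(best))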

Take Aways and Opinions

  1. None of this is 100% reliable.  Much of the mathematics is based on statistics and probability.  We only run into a rock 2% of the time.  Tongue
  2. Back Propagation can be easily implemented for known outputs.
  3. Genetic Algorithms can solve problems whose outputs are hard to quantify, so long as you can set goals for trends in the outputs.

My Opinions

  1. Power Efficiency - Watching Sean’s videos of his PyPilot on a couple of totally different boats, while they automatically change course at the waypoints and handle different sea states, I am struck by how little the helm needs to move.  No matter what, an AI version will be far less efficient with electrical power. 
  2. Learning – It should be pretty easy to create a Back Propagation version and install it on your OpenMarine RasPi.  It would start with random values in the model (an infant brain).  It could be completely passive and safe.  You would have to add the hardware for rudder position feedback.  It could run in the background, absorb all the incoming sensor data, take your responses at the helm, and start learning and modifying the model. 
  3. Teacher – It would only be as good as the helmsman.  It would learn your good habits and your bad habits.  It’ll learn your lazy, near-sleepy habits.  It’ll take your mistakes as gospel and incorporate them into its brain.  I think I might try to add such an AI once I get a working PyPilot version first.  I could let PyPilot teach it to start with and then, if/when I think I can do better, step in and let it learn from me.  Would it do as well as PyPilot in the general case?  Would it do as well as me (if I am better) in those special cases?  Sounds like a very interesting science project.
  4. Sea State – If you want it to handle different sea states, it should be obvious that you need sensors that provide some kind of data that changes with sea state.  This might be as simple as a barometric pressure sensor (~$1) to measure the rise and fall of the boat.  It might be accelerometers and gyros to measure the rocking and rolling of the vessel. 
  5. Simplicity – The beauty of AI is that no first-principles logic needs to be worked out by someone.  Just adding the sensors to the mix and adjusting the model to absorb their input will automatically start to teach it how you handle the boat in various sea states.
  6. Generalizing – I hope the above examples have given you some taste of what is involved.  I don’t believe the talk about making one common, centralized model available to everyone is feasible at any level.  Let me ask the real sailors here – Do you control your boat differently based on how it is loaded – heavy, light?  How would you propose to tell the computer this state?  Do you believe that every 40’ monohull is the same?  To be able to quantify every parameter of every boat so the learning can distinguish one 40’ boat from another would be staggeringly complex.  Naval architects have been trying to quantify boats since day one.  Are there any parameters you believe you could look at in commonly available charts comparing boats (without seeing the boat) that would tell you how to handle it merely by the numbers?  I cannot imagine I would ever voluntarily download and use a generic model.  The mere prospect of configuring it with my boat's parameters doesn't seem feasible.
  7. How to Sail – Do you want it to sail for battery-power efficiency?  How about the smoothest ride?  Fastest?  Safest?
  8. Trust Issues - Would I ever trust it?  Probably only while I'm watching it.  I'm more likely to trust PyPilot.  The thing about AI is that you don't know what edge case might crop up that never cropped up before.  Would it spike the helm hard over because of some freak combination of input parameters?  
  9. Racer – The only use case I can see where an AI autopilot might do better than, say, PyPilot is a racer.  I recently read an article about a long-distance race crewed by a professional helmsman and various others – I’m picturing a Pro/Am type event.  The racer stated that when off shift, down in the boat and unable to see the sea state, he would cringe at the on-shift helmsman's responses.  He could feel the state of the boat and knew exactly how he would have controlled it… and that the other did not do it that way.  He could feel this from his bunk!  I think a Back Propagation AI dedicated to that one race boat and taught only when the racer was on the helm (and rested) could approach his abilities, and certainly be better than the off-watch helmsmen.  I believe this would be great for any Vendée Globe type boat/contestant.  I however think that an AI trained on one boat/driver could not be applied to any other boat (no matter how similar).  I would imagine at that level, two identical boats would probably be set up differently based on their humans. 

I'm curious if there is anyone else on the forum who has worked with AI algorithms and has a different opinion.  I believe AI can do some staggeringly complex things, for good or ill; it just depends on how much effort and computer power you're willing to throw at the problem.  If I could do it with a RasPi, I might experiment.  If it requires an AMD 7950X with an Nvidia 4090 using about a kilowatt of power... I think I'll pass.   Huh
Reply

