TUTORIAL: DEEP LEARNING (DL) with a JavaFX Example

Joe

VIP Member
21/1/13
Hi


(source: Wikipedia.org)

The image shows how a machine "deep-learns": first some common features, then more specific features, and finally the unique features of an object: an elephant. Simple and easy, isn't it? Well, for a human -even for a child- recognizing the common, then the specific, and finally the unique is usually lightning fast: either one knows it or one doesn't. Human deep learning is relatively simple: you tell the child "it's an adult elephant." Later, when the child sees a baby elephant, it immediately "recognizes" that the animal is an elephant.

For a machine it isn't so. It is a Herculean task, for one reason: the machine is utterly stupid. Unbelievably blockheaded. If you want to teach a machine Deep Learning you must be
  1. a genius AND
  2. as patient as an angel
otherwise you shouldn't even start, because you could end up either so exhausted or so mired in self-doubt that you give up for good.

Is it really so difficult? Well, again: the machine is obviously blockheaded, but it is also very eager to learn. It can learn for years on end without giving up or getting tired. As said, you must be as patient as an angel, and if you see the work as simply as possible, you will find that Deep Learning is a very simple task and not a Herculean one. A famous Russian inventor coined the phrase "Things that are complex are not useful. Things that are useful are simple." Indeed, if you see things as simple, you can see the usefulness inside them.

Let's try to see things as simply as possible. We start with ourselves, the humans. A human is born without the survival instincts that animals have. A human newborn can neither walk, nor crawl, nor swim, nor fly -just a little loud-crying bundle. BUT: we humans are equipped with five exceptional senses and a powerful computer: the brain.

A machine is equipped with nothing. "Artificial" means man-made in plain English. If we want to teach a machine Deep Learning we have to mimic for the machine all the things we humans have. We have a brain; we give it a "processor". We have 5 senses: hearing, seeing, smelling, feeling and tasting; we give it the necessary "senses". And that is it.


(source: cdn.mos.cms.futurecdn.net)

5 Senses? Yes, a machine that should be taught with Deep Learning needs such senses, too. And these senses are the man-made things named sensors: the artificial ears, eyes, nose, mouth and skin. For example, an autonomously driving car (a machine) needs
  • An ultrasound sensor (ears) to locate everything around it (like a bat in flight with its ultrasound)
  • A camera (eyes) to see the traffic signs
  • A filter with a sensor (nose) to detect and reduce ozone
  • A processor (with an OS) that processes the events fed back by the mentioned "sensors".
But a car does NOT automatically know how to drive "autonomously". We have to teach it: Deep Learning. We have to teach the machine? Yes, who else? Then what about UNSUPERVISED MACHINE LEARNING? Well, it's just a term whose (unsupervised) autonomy one should not take literally. The reason is that ML relies on a specific algorithm which is "tailored" to a specific field. Outside that field the "unsupervised" ML is as dumb as a heap of metal. Similarly, Deep Learning depends on what man has taught it to learn. In other words: DL is a tailored learning. It's unsupervised ML that self-learns on top of what it has been taught: to improvise and to acquire new knowledge (derived from known or similar knowledge).

The senses are the human interfaces to the outside world. With them a human learns and acquires new knowledge. Deep Learning (for a machine) depends on artificial senses, too: the sensors. And that is the gist, the principal point: the smarter a sensor is, the deeper a machine can learn. A sensor could be any hardware (camera, ultrasonic sender/receiver, etc.) OR a swift piece of software that performs a specific algorithm. For example, a navigation system's software calculates the current position using the signals from Global Positioning System (GPS) satellites -at least three of them, and in practice four, because the receiver's clock error has to be resolved as well.
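As a toy illustration of such a "software sensor", the position fix from satellite distances can be sketched as a 2D trilateration. This is my own simplified sketch for illustration only; real GPS solves a 3D problem and additionally estimates the receiver clock bias.

```java
// Toy 2D trilateration sketch (illustration only; real GPS is 3D and
// must also solve for the receiver clock bias, hence >= 4 satellites).
public class Trilateration {
    // Anchors (x1,y1),(x2,y2),(x3,y3) with measured distances d1,d2,d3.
    // Subtracting the circle equations pairwise yields a 2x2 linear
    // system in (x,y), solved here with Cramer's rule.
    static double[] locate(double x1, double y1, double d1,
                           double x2, double y2, double d2,
                           double x3, double y3, double d3) {
        double a11 = 2 * (x2 - x1), a12 = 2 * (y2 - y1);
        double a21 = 2 * (x3 - x1), a22 = 2 * (y3 - y1);
        double b1 = d1 * d1 - d2 * d2 + x2 * x2 - x1 * x1 + y2 * y2 - y1 * y1;
        double b2 = d1 * d1 - d3 * d3 + x3 * x3 - x1 * x1 + y3 * y3 - y1 * y1;
        double det = a11 * a22 - a12 * a21; // 0 if the anchors are collinear
        return new double[] { (b1 * a22 - a12 * b2) / det,
                              (a11 * b2 - b1 * a21) / det };
    }
}
```

With three anchors at (0,0), (10,0), (0,10) and distances measured from the point (3,4), the sketch recovers (3,4). It also shows why deviating signals send the car to a wrong place: a wrong distance d simply produces a wrong, but perfectly "confident", fix.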


(source: Wikipedia.org)

The precision of a navigation system depends on the calculation algorithm and the received GPS signals. A "deep-learned" car can only learn from what it has as its bases: the calculation algorithm and the outside signals. If the signals deviate for whatever reason, the car goes to a wrong place. Deep Learning is helpless here.

You may ask: what about the SIXTH sense? This sense cannot be made by man. Emotion is so individual and so uniquely specific that it is impossible, infeasible, to build an emotional sensor. You may protest that face recognition is so advanced that it could guess what a face says about the innermost feelings of its owner. Sobbing could be 99% sadness, but at least 1% happiness is in there: there are people who sob because they are so happy. And that is the unsolvable problem of Deep Learning and the limit of the human ability to build an emotional sensor. Further, combining the different disciplines delivered by the sensors is still a big headache for every Deep Learning scientist. Therefore a blanket assertion that one day AI-ML-DL will replace or surpass humans is a hollow, obtuse and ignorant one.

So much about Deep Learning. If you have more interest in this field you can google tons of scientific articles about Deep Learning.

As I said, if you see things as simple, they become simple. Don't complicate things with hard-to-understand mathematical or probabilistic formulas if you are not adept enough in such fields. I have already shown you how to build an autonomous Pick-up/Delivery Chopper with Fuzzy Logic and a simple AI Pythagorean calculation in JavaFX. Today I will lead you through the process of Deep Learning with an autonomous Chopper: avoiding obstacles.

(Next: FLDrone development)
 

(cont. FLDrone development)

FLDrone with Deep Learning
As an example for our Deep Learning or, if you will, Unsupervised Machine Learning, we take the FLDrone with the PICK-UP/DELIVERY feature from the blog "Artificial Intelligence - Machine Learning - Fuzzy Logic with JavaFX" (see the last section; click HERE) and extend it to a more intelligent Chopper with Deep Learning.

FLDrone_5.png
(Image 1)

You have certainly learned from the above-mentioned article that AI-ML is unthinkable without Fuzzy Logic, or at least quite difficult to implement. Fuzzy Logic, with its special form of floating-point data (Pyramid or Trapezoid) and Variables (min. one FuzzyData), is useful as a knowledge cache. An accumulation of knowledge caches makes a knowledge repository. Otherwise we would need a database that provides such special means to cache the knowledge. And I have shown you how FL works: with only one FL script we extended the PICK-UP-only Chopper to the PICK-UP/DELIVERY chopper without having to modify the FL script itself. The only change was a modification of the peripheries: the methods computeDelta() and draw().

Deep Learning or Unsupervised Machine Learning requires a bit more knowledge if we want to let the Chopper steer itself and work out how to deviate from or circumvent the obstacles. First of all we have to extend the knowledge repository: the FL script.

1) The chopper additionally needs some more FuzzyData and two more FuzzyVariables:
Java:
<!Obstacle[inFront(-100,-80,-20,-1),hit(-1,0,1),ahead(1,20,80,100)]!>
<!Maneuver[right(20,30,40),left(-20, -15, -10)]!>
Obstacle and Maneuver are two FuzzyVariables, while inFront, ahead, hit, right and left are the FuzzyData. The specified data (zones) are Trapezoids and a Pyramid (hit). The values are doubles. Because the diameter of an obstacle is 40 (units) and the chopper size (i.e. image size) is a 23x35 rectangle, the FuzzyData zones lie between 20 and 80 (rounded). 100 is the so-called safety distance. To maneuver, the chopper has to sway between 20 (left) and 40 (right, due to the chopper tail).
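To make the zone notation concrete, here is a minimal membership sketch of my own. It mirrors the Trapezoid/Pyramid notation only; the actual fuzzylogic.jar API may evaluate the zones differently.

```java
// Minimal membership sketch for the Trapezoid/Pyramid zones above.
// Illustration only -- not the actual fuzzylogic.jar implementation.
public class Membership {
    // Trapezoid (a,b,c,d): rises on [a,b], plateau 1 on [b,c], falls on [c,d].
    static double trapezoid(double x, double a, double b, double c, double d) {
        if (x <= a || x >= d) return 0.0;
        if (x < b)  return (x - a) / (b - a);
        if (x <= c) return 1.0;
        return (d - x) / (d - c);
    }
    // Pyramid (a,peak,c): a trapezoid whose plateau collapses to one point.
    static double pyramid(double x, double a, double peak, double c) {
        return trapezoid(x, a, peak, peak, c);
    }
}
```

For example, an obstacle distance of -50 lies fully inside inFront(-100,-80,-20,-1) (membership 1.0), a distance of -90 is only half inFront (0.5), and hit(-1,0,1) peaks at exactly 0.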

2) And some new knowledge
Java:
  // check to obstacle
  Obstacle = this.checkObstacle()
  if (Obstacle is inFront) then Maneuver is right
  if (Obstacle is ahead) then Maneuver is left
  this:obstacle = Maneuver
The SENSOR this.checkObstacle() gives back a (double) value which is 0 (no obstacle) or the distance to the obstacle. Depending on the Obstacle sample, FL decides to maneuver to the left or to the right. The defuzzified value of the FV Maneuver is then fed back to the Chopper Controller (here the JavaFX app) via this:obstacle = Maneuver.
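The defuzzification step can be sketched as a discrete centroid over the clipped output set. This is a common Mamdani-style scheme; that fuzzylogic.jar defuzzifies exactly this way is my assumption, not something the article states.

```java
// Centroid defuzzification sketch for a triangular FuzzyData such as
// right(20,30,40). 'fire' is the rule's firing strength in [0,1].
public class Defuzzifier {
    // Triangle membership (a, peak, c), as used by Maneuver's right/left.
    static double triangle(double x, double a, double peak, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < peak ? (x - a) / (peak - a) : (c - x) / (c - peak);
    }
    // Discrete centroid of the output set clipped at the firing strength.
    static double centroid(double a, double peak, double c, double fire) {
        double num = 0, den = 0, step = (c - a) / 200.0;
        for (double x = a; x <= c; x += step) {
            double m = Math.min(fire, triangle(x, a, peak, c)); // clip
            num += x * m;
            den += m;
        }
        return den == 0 ? peak : num / den; // crisp maneuver value
    }
}
```

Firing "Maneuver is right" at full strength yields roughly 30, the center of right(20,30,40), which the controller then applies as the horizontal sway.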

With the FL decision Maneuver the controller is able to compute the new position for the next Chopper move and feeds the results (deltaX and deltaY) back to FL via the SENSOR this.computeDelta(). Up to this step the Chopper works with its existing knowledge, which was already known in the previous PICK-UP/DELIVERY version.
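That feedback loop can be sketched as below. The method names computeDelta and nextX are my own hypothetical illustration of the controller step, not necessarily the actual code in FLDrone.java.

```java
// Hypothetical controller step: compute the deltas fed back to the FL
// script, and apply the defuzzified Maneuver as a sway on the next move.
public class ControllerStep {
    // deltaX/deltaY: signed offsets from the chopper to the target,
    // matched against the Distance and Height zones of the FL script.
    static double[] computeDelta(double chopperX, double chopperY,
                                 double targetX, double targetY) {
        return new double[] { chopperX - targetX, chopperY - targetY };
    }
    // Next horizontal position: current x, plus the defuzzified Speed,
    // plus the defuzzified Maneuver (0 when no obstacle rule fired).
    static double nextX(double x, double speed, double maneuver) {
        return x + speed + maneuver;
    }
}
```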

How does the FLDrone work with Deep Learning?
After having set the cargo position (PICK-UP) or the location (DELIVERY), an obstacle can be put anywhere and at any time by a mouse click on the canvas, so that the chopper would bump into it on its way to the target (cargo or location) or on the way home. And more than once: you can place the obstacle again, anytime and anywhere, so that the chopper has to maneuver around it. The way the chopper decides to circumvent the obstacle usually differs each time. And that is how Deep Learning knowledge works: first some common traits, then more specific features and finally the uniqueness: the "how near" for a safe deviation decision.

The modified script FLDrone.txt
Java:
// FuzzyVariables and their corresponding FuzzyData and FuzzyParm
<!Distance[tooFarAhead(-620,-520,-420,-320), veryFarAhead(-330,-230,-130,-30), farAhead(-130,-14,-10,1),target(-1,0,1), farBehind(1,10,14,130),veryFarBehind(30,130,230,330),tooFarBehind(320,420,520,620)]!>
//
<!Height[tooHigh(-620,-520,-420,-320),veryHigh(-330,-230,-130,-50),high(-130,-24,-16,1),goal(-1,0,1),low(1,16,24,130),veryLow(50,130,230,330), tooLow(320,420,520,620)]!>
//
<!Obstacle[inFront(-100,-80,-20,-1),hit(-1,0,1),ahead(1,20,80,100)]!>
<!Maneuver[right(20,30,40),left(-20, -15, -10)]!>
//
<!Speed[speedBACK(-3,-2.5,-2),slowBACK(-2,-1.5,-1),back(-1,-0.5,-0.1),standing(-0.1,0,0.1),fore(0.1,0.5,1),slowFORE(1,1.5,2),speedFORE(2,2.5,3)]!>
//
double x = 0
//
while x < 3 && x >= 0
  // check to obstacle
  Obstacle = this.checkObstacle()
  if (Obstacle is inFront) then Maneuver is right
  if (Obstacle is ahead) then Maneuver is left
  this:obstacle = Maneuver
  //
  this.computeDelta()
  Height = this:deltaY
  Distance = this:deltaX
    //-------------------------------------AHEAD------------------------------------------------------
  if ((Distance is tooFarAhead || Distance is veryFarAhead) && Height is tooHigh) then Speed is speedFORE
  if ((Distance is tooFarAhead || Distance is veryFarAhead) && Height is veryHigh) then Speed is slowFORE
  if ((Distance is tooFarAhead || Distance is veryFarAhead) && Height is high) then Speed is slowFORE
  //
  if ((Distance is tooFarAhead || Distance is veryFarAhead) && Height is tooLow) then Speed is speedFORE
  if ((Distance is tooFarAhead || Distance is veryFarAhead) && Height is veryLow) then Speed is slowFORE
  if ((Distance is tooFarAhead || Distance is veryFarAhead) && Height is low) then Speed is slowFORE
  //
  if (Distance is farAhead && Height is tooHigh) then Speed is speedFORE
  if (Distance is farAhead && Height is veryHigh) then Speed is slowFORE
  if (Distance is farAhead && Height is high) then Speed is fore
  //
  if (Distance is farAhead && Height is tooLow) then Speed is slowFORE
  if (Distance is farAhead && Height is veryLow) then Speed is slowFORE
  if (Distance is farAhead && Height is low) then Speed is fore
  //------------------------------BEHIND-------------------------------------------------------------
  if ((Distance is tooFarBehind || Distance is veryFarBehind) && Height is tooHigh) then Speed is speedBACK
  if ((Distance is tooFarBehind || Distance is veryFarBehind) && Height is veryHigh) then Speed is slowBACK
  if ((Distance is tooFarBehind || Distance is veryFarBehind) && Height is high) then Speed is slowBACK
  //
  if ((Distance is tooFarBehind || Distance is veryFarBehind) && Height is tooLow) then Speed is speedBACK
  if ((Distance is tooFarBehind || Distance is veryFarBehind) && Height is veryLow) then Speed is slowBACK
  if ((Distance is tooFarBehind || Distance is veryFarBehind) && Height is low) then Speed is slowBACK
  //
  if (Distance is farBehind && Height is tooHigh) then Speed is speedBACK
  if (Distance is farBehind && Height is veryHigh) then Speed is slowBACK
  if (Distance is farBehind && Height is high) then Speed is back
  //
  if (Distance is farBehind && Height is tooLow) then Speed is speedBACK
  if (Distance is farBehind && Height is veryLow) then Speed is slowBACK
  if (Distance is farBehind && Height is low) then Speed is back
  //----------------------------AT TARGET------------------------------------------------------------
  if (Distance is target && (Height is tooHigh || Height is veryHigh)) then Speed is slowFORE
  if (Distance is target && Height is high) then Speed is fore
  //
  if (Distance is target && (Height is tooLow || Height is veryLow)) then Speed is slowBACK
  if (Distance is target && Height is low) then Speed is back
  // ------------------------------------------------------------------------------------------------
  if (Height is goal && (Distance is tooFarBehind || Distance is veryFarBehind)) then Speed is slowFORE
  if (Height is goal && Distance is farBehind) then Speed is fore
  if (Height is goal && (Distance is tooFarAhead || Distance is veryFarAhead)) then Speed is slowBACK
  if (Height is goal && Distance is farAhead) then Speed is back
  // ------------------------------------------------------------------------------------------------
  if (Distance is target && Height is goal) then Speed is standing
  //
  this:speed = Speed
  x = this:home
endwhile
FLDrone_6.png

FLDrone_7.png

That is HOW this Deep Learning works. This kind of Deep Learning is widely applied in military drones, cruise missiles, etc.

The chopper.zip contains FLDrone.java, script/FLDrone.txt and the FuzzyLogic package fuzzylogic.jar
Joe
 



PS.
If anyone has more interest in Java FuzzyLogic -besides my own FL implementation- he or she can take a look at the beautiful, complex and very functional (with JChart) jFuzzyLogic on SourceForge. Click HERE to download the sources and examples. If you do, you can run the "demo" example in a CMD window as follows and get an animation together with a colorful chart.
Code:
C:\JFX\Fuzzy\FL_SourceDorge>java -jar jFuzzyLogic.jar demo
jFuzzyLogic version JFuzzyLogic 3.3 (build 2015-04-09), by Pablo Cingolani.

Running demo
Start: TipperAnimation
FUNCTION_BLOCK tipper

VAR_INPUT
        food : REAL;
        service : REAL;
END_VAR

VAR_OUTPUT
        tip : REAL;
END_VAR

FUZZIFY food
        TERM delicious :=  (7.0, 0.0) (9.0, 1.0) (10.0, 1.0) ;
        TERM rancid :=  (0.0, 1.0) (1.0, 1.0) (3.0, 0.0) ;
END_FUZZIFY
....
Tip_Demo.png

Compared to my implementation, SourceForge's jFuzzyLogic has the size of an elephant (very complex), while mine has that of a mouse (very simple):
6 Java APIs:
  1. FuzzyData
  2. FuzzyVariable
  3. FuzzyFrame
  4. FuzzyEngine
  5. FuzzyCluster
  6. FuzzyIOs
 