Step 3:  

Now train a network using the program you just wrote (BackpropCE) with some input data. You can just copy TestBackpropXOR from project 4, question 2, step 6. Call this new program TestBackpropCE.
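
In case you want to double-check the BackpropCE function you wrote in the previous step, here is a minimal sketch of what it could look like. This is only a reference under a few assumptions: a learning rate (alpha) of 0.9 and the same inline sigmoid used in the test program below; your own version from step 2 may differ.

function [W1, W2] = BackpropCE(W1, W2, X, D)
    alpha = 0.9;                %learning rate (assumed value)
    N = 4;                      %number of training rows in X
    for k = 1:N
        x = X(k,:)';            %take one input row as a column vector
        d = D(k);               %corresponding correct output
        v1 = W1*x;              %hidden layer weighted sums
        y1 = 1./(1+exp(-v1));   %hidden layer output (sigmoid)
        v  = W2*y1;             %output layer weighted sum
        y  = 1./(1+exp(-v));    %network output (sigmoid)
        e     = d - y;          %output error
        delta = e;              %with cross-entropy, the output delta is just the error
        e1     = W2'*delta;     %propagate the error back to the hidden layer
        delta1 = y1.*(1-y1).*e1;
        W1 = W1 + alpha*delta1*x';  %update input-to-hidden weights
        W2 = W2 + alpha*delta*y1';  %update hidden-to-output weights
    end %for k
end %function BackpropCE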

The following is a copy and paste of TestBackpropXOR, except that the training loop calls BackpropCE instead of BackpropXOR. Save it under the name TestBackpropCE.


clear;
clc;
close all;

X=[0 0      %input matrix
   0 1
   1 0
   1 1];

D=[0        %correct output matrix
   1
   1
   0];

W1=rand(4,2); %assign random weights to connections between input and hidden layers
W2=rand(1,4); %assign random weights to connections between hidden and output layers

for epoch=1:10000  %start training the network

        clc;  %these additional lines just let you see the progress of the training in the command window when you run the program
        disp(['Epoch # ' num2str(epoch)]);
        [W1 W2] = BackpropCE(W1,W2,X,D);
        W1 %print updated values of W1 to screen
        W2 %print updated values of W2 to screen

end; %for epoch

%*********************************************************************************
    %The network is now trained. From here, we're just checking whether the network is
    %adequately trained by giving it the input values and seeing what output values we get
   
    N=4; %number of input trials (number of rows in X)
   
    for k=1:N
        x=X(k,:)';  %note the transpose symbol '
        v1=W1*x; %get the hidden layer node values
        y1=1./(1+exp(-v1)); %apply the activation function to get the output of the hidden layer
        v=W2*y1; %get the output layer node value
        y(k)=1./(1+exp(-v)); %get the final output of the network.
        %Note that y(k) saves the values of the network output for each k so we can print it below    
    end; %for k=1:N
   
    y'   %print values of the output node (y) to the screen for each of the four trials
   
    %you should end up with 4 numbers because N=4. Compare these four
    %numbers to the values in the vector D, which contains the correct answers.
    %They should be very similar if the network was trained properly.
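
If you want to see the comparison directly, one optional addition (just a suggested check, not part of the original program) is to print the network output next to the correct output:

    disp('   network output    correct output');  %column headings for the comparison
    disp([y' D]);   %y' and D are both 4x1, so this prints a 4-row, 2-column table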