Step 3:  

clear;
clc;
close all;

X = [0 0        % input matrix: one training pattern per row
     0 1
     1 0
     1 1];

D = [0          % correct (target) output for each input row
     1
     1
     0];

X = X';   % transpose so each training pattern is a column (the format train() expects)
D = D';   % transpose the targets to match

net = feedforwardnet(10);  % Create a feedforward network with one hidden layer of 10 neurons

% The next few lines serve one purpose: to force the training to go through enough
% epochs to properly train the network (we don't want it to stop training too early).
% We don't usually need these extra steps, but because we have so few training trials
% (only 4) it's very easy for the network to get stuck in a local minimum instead of
% the global minimum during training. So we force it to go through all 1000 epochs
% of training.
 
net.trainParam.goal = 0;          % Ideal goal: we'd like the error to reach zero (this may not happen, but it's the target).
net.divideParam.valRatio = 0;     % Don't set aside part of the training data for "validation"; the data set is already too small (only 4 trials, or rows, of training).
net.divideParam.trainRatio = 1;   % Use all the training data (the four rows) for training.
net.divideParam.testRatio = 0;    % Don't set aside any of the training data for an independent test.
net.trainParam.min_grad = 1e-100; % Only stop training early if the slope of the error surface is essentially zero (you've reached the flat part of the path near the minimum of the error function).
net.trainParam.showWindow = 0;    % Don't show the training window while the program runs.
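
% The "1000 epochs" mentioned above is simply the toolbox's default maximum for this
% training function. If you prefer to state it explicitly (optional; shown here only
% as an illustration), you could add:
net.trainParam.epochs = 1000;     % maximum number of training epochs (1000 is already the default)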


net = train(net,X,D);    % Train the network using the data in X and D

Your network is now trained.  In the next step, we will test it to see whether the training was successful.
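
As a preview of that test, here is a minimal sketch (not the tutorial's own test code) of one common way to check the trained network's output on the same four input patterns; evaluating the network object directly with net(X) is a standard toolbox call (sim(net,X) is equivalent):

Y = net(X)        % evaluate the trained network on the four input columns
                  % Y should be close to the targets D = [0 1 1 0] if training succeeded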