Let me use the book notation to avoid confusion. You have two points $\mathbf{x}_1$ and $\mathbf{x}_2$ (which you called a1 and a2), and their target outputs (which you called assignment) are $y_1$ and $y_2$.
Either point, call it just $\mathbf{x}$ for simplicity, is a vector that has $d$ components $x_1, x_2, \ldots, x_d$. Notice that bold $\mathbf{x}$ denotes a full data point, while italic $x_i$ denotes a component of that data point. We add a constant 1 component to each data point and call that component $x_0$ to simplify the expression for the perceptron. If the weight vector of the perceptron is $\mathbf{w} = (w_0, w_1, \ldots, w_d)$ (where $w_0$ takes care of the threshold value of that perceptron), then the perceptron implements
$$h(\mathbf{x}) = \operatorname{sign}\!\left(\mathbf{w}^{\mathsf{T}}\mathbf{x}\right),$$
where $\operatorname{sign}(\cdot)$ returns $+1$ if its argument is positive and $-1$ if its argument is negative.
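If it helps, here is a minimal sketch of that hypothesis in Python. The function name `predict`, the example values, and the choice of returning $+1$ when the argument is exactly zero are my own assumptions (the book leaves $\operatorname{sign}(0)$ unspecified):

```python
import numpy as np

def predict(w, x):
    """Perceptron hypothesis h(x) = sign(w^T x).

    `w` and `x` are 1-D arrays of length d+1, where x[0] is the
    constant 1 component and w[0] plays the role of the threshold.
    Returning +1 when the dot product is exactly 0 is an arbitrary
    tie-breaking assumption.
    """
    return 1 if np.dot(w, x) > 0 else -1

x = np.array([1.0, 2.0, -1.0])   # hypothetical point (1, x1, x2), x0 = 1 prepended
w = np.array([0.5, -1.0, 0.3])   # hypothetical weights, w0 is the threshold term
print(predict(w, x))             # prints -1 for these values
```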
Example: Say the first data point is $\mathbf{x}_1 = (x_1, x_2)$ (two dimensional, so $d = 2$). Add the constant component $x_0 = 1$ and you have $\mathbf{x}_1 = (1, x_1, x_2)$. Therefore, the perceptron's output on this point is $\operatorname{sign}(w_0 + w_1 x_1 + w_2 x_2)$. If this formula returns a value different from the target output $y_1$ (say, $-1$ when $y_1 = +1$), the PLA adjusts the values of the weights, trying to make the perceptron's output agree with the target output for this point $\mathbf{x}_1$. It uses the specific PLA update rule, $\mathbf{w} \leftarrow \mathbf{w} + y_1 \mathbf{x}_1$, to achieve that.
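To make that update concrete, here is a minimal sketch of one PLA correction step. Only the update rule $\mathbf{w} \leftarrow \mathbf{w} + y\,\mathbf{x}$ comes from the algorithm itself; the function name `pla_step`, the data values, and the zero initialization are illustrative assumptions:

```python
import numpy as np

def pla_step(w, x, y):
    """One PLA correction: if sign(w^T x) disagrees with the target y,
    apply the update rule w <- w + y * x. Returns the new weights."""
    h = 1 if np.dot(w, x) > 0 else -1   # perceptron's output on x
    if h != y:                          # point is misclassified
        w = w + y * x                   # nudge w toward agreeing with y
    return w

w = np.array([0.0, 0.0, 0.0])    # initial weights (w0 is the threshold)
x1 = np.array([1.0, 2.0, -1.0])  # data point with x0 = 1 prepended
y1 = 1                           # its target output
w = pla_step(w, x1, y1)          # misclassified, so w becomes (1.0, 2.0, -1.0)
print(w)
```

After this single step the perceptron classifies $\mathbf{x}_1$ correctly; in general, PLA keeps picking misclassified points and repeating this update until none remain (which is guaranteed to happen only if the data are linearly separable).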