LFD Book Forum: *answer* question 13 issue
#1
Charles (Junior Member, Join Date: Feb 2018, Posts: 1) - *answer* question 13 issue

Hi,

I am struggling to get the right answer for question 13. I have the algorithm working (in Python), but it is just not good enough to reach perfect accuracy (or at least an error below 5%): on every run, a few points end up misclassified. My SVM implementation (copied below) has worked well for other exercises, and I think I added the RBF kernel correctly. Is anyone else having the same issue? I can't figure out what's wrong.
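For reference, the hard-margin dual I am handing to the QP solver (as I understand it from the book) is

$$\min_{\alpha}\;\frac{1}{2}\,\alpha^{\top}Q\,\alpha-\mathbf{1}^{\top}\alpha,\qquad Q_{nm}=y_n y_m K(\mathbf{x}_n,\mathbf{x}_m),$$
$$\text{subject to}\quad \alpha_n\ge 0\ \text{for all } n\quad\text{and}\quad \sum_{n=1}^{N} y_n\alpha_n=0,$$

with the RBF kernel $K(\mathbf{x},\mathbf{x}')=\exp\!\left(-\gamma\,\lVert\mathbf{x}-\mathbf{x}'\rVert^{2}\right)$ and $\gamma=1.5$.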

Below is the standalone implementation:
Code:
```
import numpy as np
import math
import cvxopt

# Suppress the QP solver's per-iteration output
cvxopt.solvers.options['show_progress'] = False

class svm:
    ''' Model: support vector machines.
        Error measure: classification error.
        Learning algorithm: hard-margin SVM via the dual QP
        (linearly separable data). '''

    def fit(self, X, y, kernel='linear', degree=2, gamma=1.):
        ''' Solves the dual QP and stores the alphas. '''
        N, dimension = X.shape
        K = np.zeros(shape=(N, N))
        # Compute the inner products (or kernels) for each pair of vectors
        if kernel == 'linear':
            for i in range(N):
                for j in range(N):
                    K[i, j] = np.dot(X[i], X[j])
        elif kernel == 'poly':
            for i in range(N):
                for j in range(N):
                    K[i, j] = np.square(1 + np.dot(X[i], X[j]))
        elif kernel == 'rbf':
            for i in range(N):
                for j in range(N):
                    K[i, j] = np.exp(-gamma * np.linalg.norm(X[i] - X[j]) ** 2)

        # Generate all the matrices and vectors for the cvxopt QP solver
        P = cvxopt.matrix(np.outer(y, y) * K, tc='d')
        q = -1. * cvxopt.matrix(np.ones(N), tc='d')
        G = cvxopt.matrix(np.eye(N) * -1, tc='d')   # alpha_n >= 0
        h = cvxopt.matrix(np.zeros(N), tc='d')
        A = cvxopt.matrix(y, (1, N), tc='d')        # sum_n y_n alpha_n = 0
        b = cvxopt.matrix(0.0, tc='d')

        solution = cvxopt.solvers.qp(P, q, G, h, A, b)

        a = np.ravel(solution['x'])
        # Boolean mask of the (numerically) non-zero alphas
        ssv = a > 1e-5
        # Select the corresponding alphas a, support vectors sv, and labels sv_y
        a_small = a[ssv]   # alphas
        sv = X[ssv]        # support vectors (Xs)
        sv_y = y[ssv]      # support vectors (ys)

        # Compute the weights w_svm from the support vectors
        w_svm = np.zeros((1, dimension))
        for each in range(len(a_small)):
            w_svm += np.reshape(a_small[each] * sv_y[each] * sv[each],
                                (1, dimension))

        # Compute the intercept b_svm from the support vectors, averaged
        # to a single scalar (since y_s = +/-1, it does not matter whether
        # we divide by sv_y or not)
        b_svm = np.mean(sv_y - np.dot(w_svm, sv.T))

        g = np.sign(np.inner(w_svm, X) + b_svm)

        self.a = a
        self.a_small = a_small
        self.sv = sv
        self.sv_y = sv_y
        self.w = w_svm
        self.b = b_svm
        self.g = g
        return self

    def predict(self, X):
        ''' Returns g(x) for each row of X as a row vector. '''
        self.g = np.sign(np.inner(self.w, X) + self.b)
        return self.g

N = 100
gamma = 1.5
run = 100

Ein = []
for r in range(run):
    # Target: f(x) = sign(x2 - x1 + 0.25 sin(pi * x1))
    X = np.random.uniform(-1, 1, size=(N, 2))
    y = np.sign(X[:, 1] - X[:, 0] + .25 * np.sin(math.pi * X[:, 0]))
    X = np.insert(X, 0, 1, axis=1)   # prepend the constant coordinate x0 = 1

    svm_RBF = svm()
    svm_RBF.fit(X, y, kernel='rbf', gamma=gamma)
    result = svm_RBF.predict(X)

    Ein.append(1 - np.average(np.equal(result, y)))

# Fraction of runs where the data was separated perfectly (Ein = 0)
sep = np.equal(Ein, 0.)
print(np.average(sep))
```
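
One thing I am second-guessing: with a nonlinear kernel, I believe the hypothesis has to be evaluated through the kernel expansion $g(\mathbf{x})=\mathrm{sign}\big(\sum_{\alpha_n>0}\alpha_n y_n K(\mathbf{x}_n,\mathbf{x})+b\big)$ rather than through an explicit weight vector, since $\mathbf{w}$ lives in the transformed space. Here is a minimal sketch of such a kernel-form predict, reusing the `a_small`, `sv`, and `sv_y` attributes stored by `fit` (the function names `rbf_kernel` and `predict_kernel` are my own):
Code:
```
import numpy as np

def rbf_kernel(x1, x2, gamma):
    # K(x, x') = exp(-gamma * ||x - x'||^2)
    return np.exp(-gamma * np.linalg.norm(x1 - x2) ** 2)

def predict_kernel(model, X, gamma=1.5):
    ''' Kernel-form hypothesis:
        g(x) = sign( sum_n alpha_n y_n K(x_n, x) + b ),
        summing over the stored support vectors. '''
    # Intercept from any one support vector:
    # b = y_s - sum_n alpha_n y_n K(x_n, x_s)
    b = model.sv_y[0] - sum(
        a * y * rbf_kernel(x, model.sv[0], gamma)
        for a, x, y in zip(model.a_small, model.sv, model.sv_y))
    scores = np.array([
        sum(a * y * rbf_kernel(x, x_new, gamma)
            for a, x, y in zip(model.a_small, model.sv, model.sv_y)) + b
        for x_new in X])
    return np.sign(scores)
```
Calling `predict_kernel(svm_RBF, X)` in place of `svm_RBF.predict(X)` would then classify with the kernel expansion directly.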

Tags: python, q13, rbf, svm