#1
*ANSWER* questions w linear regression & weight decay
I have been running the weight-decay examples in Q2-6, but haven't seen any real improvement in the out-of-sample error compared to no regularization at all. Is that just a feature of this particular problem, or should I recheck my calculations?
Unfortunately (or not), the answers I've been getting do appear among the multiple-choice options.
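For reference, the closed form I'm computing is the standard weight-decay solution, which reduces to plain linear regression at lambda = 0:

w_reg = (Z'Z + lambda*I)^(-1) Z'y

so I'd have expected Eout to move at least a little as lambda grows.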
#2
Re: *ANSWER* questions w linear regression & weight decay

#3
Re: *ANSWER* questions w linear regression & weight decay

#4
Re: *ANSWER* questions w linear regression & weight decay
Quote:
And yes, I have been using classification error, but that is a good point; I did start out using the regression residuals and such, but that mistake at least I caught.
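For anyone else who hits this, the distinction in R terms (just a sketch; Z, w, and y here stand for the transformed inputs, the fitted weights, and the +/-1 labels):
Code:
y_hat <- sign(Z %*% w)               # classify by the sign of the linear output
class_err <- mean(y_hat != y)        # classification error: fraction misclassified
resid_err <- mean((Z %*% w - y)^2)   # mean squared residual: not the same quantity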
#5
Re: *ANSWER* questions w linear regression & weight decay
As I suspected, all my answers on these were wrong. Does anyone have code (R if possible) that I could use for comparison? I suspect my problem was something dumb; even the original linear regression was wrong, and that one I had checked against the answer from R's lm() function.
I'm especially concerned since HW 7 uses all the same data again, so I really need to track this down.
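For what it's worth, this is the lm() cross-check I've been using, in case someone spots a flaw in it (it assumes the transformed matrix Z from the homework, which already carries the column of ones, and y the labels):
Code:
w_manual <- solve(t(Z) %*% Z) %*% t(Z) %*% y   # normal-equations solution, lambda = 0
w_lm <- coef(lm(y ~ Z - 1))                    # "- 1" since Z already includes the intercept column
all.equal(as.numeric(w_manual), as.numeric(w_lm))   # should print TRUE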
#6
Re: *ANSWER* questions w linear regression & weight decay
Quote:

#7
Re: *ANSWER* questions w linear regression & weight decay
Michael:
Here you go:
Code:
#READ IN THE FILES.
datos1 <- read.table("in.dta")
names(datos1) <- c("X1","X2","Y")
datos2 <- read.table("out.dta")
names(datos2) <- c("X1","X2","Y")

#FOR THE FOLLOWING QUESTIONS, SET UP THE MATRICES
Z <- with(datos1, cbind(rep(1, nrow(datos1)), X1, X2,
                        X1^2, X2^2, X1*X2, abs(X1-X2), abs(X1+X2)))
Z <- as.matrix(Z)
Zout <- with(datos2, cbind(rep(1, nrow(datos2)), X1, X2,
                           X1^2, X2^2, X1*X2, abs(X1-X2), abs(X1+X2)))
Zout <- as.matrix(Zout)

#NOW FIT WITH WEIGHT DECAY USING LAMBDA = 10^-3
lambda <- 10^(-3)
M <- t(Z) %*% Z + diag(rep(1, 8)) * lambda   # Z'Z + lambda * I (8x8 identity)
w <- solve(M) %*% t(Z) %*% datos1$Y
Ym <- as.numeric(sign(Z %*% w))
Ein <- mean(datos1$Y != Ym)     # in-sample classification error
Ym <- as.numeric(sign(Zout %*% w))
Eout <- mean(datos2$Y != Ym)    # out-of-sample classification error
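And since the later questions sweep lambda = 10^k over a range of k, the same fit drops straight into a loop. A sketch reusing Z, Zout, datos1, datos2 from above (the k range is the one I used; adjust as needed):
Code:
for (k in -2:2) {
  lambda <- 10^k
  w <- solve(t(Z) %*% Z + lambda * diag(ncol(Z))) %*% t(Z) %*% datos1$Y
  cat(sprintf("k = %2d: Ein = %.4f, Eout = %.4f\n", k,
              mean(sign(Z %*% w) != datos1$Y),
              mean(sign(Zout %*% w) != datos2$Y)))
}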
#8
Re: *ANSWER* questions w linear regression & weight decay
Thanks!
Yes, Elroch, I used the full range of lambda. I think my mistake is elsewhere. 
#9
Re: *ANSWER* questions w linear regression & weight decay
Quote:
OK, I'm going to expose most of my insult to the art of programming for these questions. Don't use it as a style guide (especially that nasty bit of unvectorised code; also, I suspect the as.matrix calls may be superfluous). The data format should be clear, I hope.
Code:
WeightDecayLinearRegressionSolver <- function(inputs, outputs, lambda) {
  # inputs is a matrix of 2d points (with a bias coordinate)
  # outputs is a vector providing a real-valued function of those points
  if (isTRUE(all.equal(var(outputs), 0))) {
    # The completely degenerate case, which occurs when trying to
    # classify data of a single class
    result <- c(outputs[1], 0, 0)
  } else {
    result <- PseudoInverse(t(as.matrix(inputs)) %*% as.matrix(inputs) +
                            diag(rep(lambda, length(inputs[1, ])))) %*%
              t(as.matrix(inputs)) %*% outputs
  }
  result
}

PseudoInverse <- function(mat) {
  tmat <- t(as.matrix(mat))
  solve(tmat %*% as.matrix(mat)) %*% tmat   # base R's solve() for the inverse
}

ClassificationError <- function(actual, predicted) {
  result <- 0
  for (i in 1:length(actual)) {             # the nasty unvectorised bit
    if (abs(actual[i] - predicted[i]) > 0.5) {
      result <- result + 1
    }
  }
  result / length(actual)
}
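To connect it to the earlier R post, usage would look something like this (a sketch only, with Z, Zout, datos1, datos2 as built there; sign() turns the real-valued output into a +/-1 prediction):
Code:
w <- WeightDecayLinearRegressionSolver(Z, datos1$Y, 10^-3)
Ein <- ClassificationError(datos1$Y, sign(Z %*% w))       # in-sample
Eout <- ClassificationError(datos2$Y, sign(Zout %*% w))   # out-of-sample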
#10
Re: *ANSWER* questions w linear regression & weight decay
I am really stuck, starting with problem 2 of homework 6. I want to find out where I went wrong before I start on homework 7, since I got 3/10 on homework 6. Is there anybody here who reads Clojure and can tell me where I went wrong?
Code:
(ns hw6.core
  (:require [clojure.java.io :as io]
            [clatrix.core :as m]))

;; Moore-Penrose pseudo-inverse via the normal equations
(defn pseudo-inverse [M]
  (m/* (m/i (m/* (m/t M) M)) (m/t M)))

(defn read-dataset [url]
  (m/matrix
    (with-open [r (io/reader url)]
      (doall
        (map (comp (partial map read-string)
                   (partial re-seq #"\S+"))
             (line-seq r))))))

;; the homework's nonlinear transform:
;; (1, x1, x2, x1^2, x2^2, x1*x2, |x1 - x2|, |x1 + x2|)
(defn augment-dataset [M]
  (let [[x1s x2s] (m/cols M)
        [n] (m/size M)]
    (m/hstack (m/ones n 1) x1s x2s
              (m/mult x1s x1s) (m/mult x2s x2s) (m/mult x1s x2s)
              (m/abs (m/- x1s x2s)) (m/abs (m/+ x1s x2s)))))

(defn ys [M]
  (let [[_ _ ys] (m/cols M)]
    ys))

(defn read-in-sample []
  (read-dataset "http://work.caltech.edu/data/in.dta"))

(defn read-out-of-sample []
  (read-dataset "http://work.caltech.edu/data/out.dta"))

(defn read-setup []
  (let [in (read-in-sample)
        out (read-out-of-sample)]
    {:x-ins (augment-dataset in)
     :x-outs (augment-dataset out)
     :y-ins (ys in)
     :y-outs (ys out)}))

(defn weights [prob-set]
  (m/* (pseudo-inverse (:x-ins prob-set)) (:y-ins prob-set)))

;; counts a point as correct when its squared residual is below 0.5
(defn e-in [prob-set]
  (let [the-diff (m/- (m/* (:x-ins prob-set) (weights prob-set)) (:y-ins prob-set))
        matches (count (filter (partial > 0.5) (m/mult the-diff the-diff)))
        n (count (m/rows the-diff))]
    (/ (- n matches) n)))

(defn e-out [prob-set]
  (let [the-diff (m/- (m/* (:x-outs prob-set) (weights prob-set)) (:y-outs prob-set))
        matches (count (filter (partial > 0.5) (m/mult the-diff the-diff)))
        n (count (m/rows the-diff))]
    (/ (- n matches) n)))

;; squared distance from my (Ein, Eout) to a candidate answer (x, y)
(defn problem-6-2-eval [x y]
  (+ (* (- 3/35 x) (- 3/35 x))
     (* (- 21/125 y) (- 21/125 y))))

;; hw6.core> (seq (weights (read-setup)))
;; (1.6470670613492875 0.14505926927976592 0.10154120500179364 2.032968443227123
;;  1.8280437313439264 2.4815294496056963 4.158938609024668 0.31651714084678323)
;; hw6.core> (e-in (read-setup))
;; 3/35
;; hw6.core> (e-out (read-setup))
;; 21/125
;; hw6.core> (problem-6-2-eval 0.03 0.08)
;; 0.010848081632653063
;; hw6.core> (problem-6-2-eval 0.03 0.10)
;; 0.007728081632653061
;; hw6.core> (problem-6-2-eval 0.04 0.09)
;; 0.008173795918367349
;; hw6.core> (problem-6-2-eval 0.04 0.11)
;; 0.005453795918367348
;; hw6.core> (problem-6-2-eval 0.05 0.10)
;; 0.005899510204081633

Many many thanks in advance to whoever can straighten me out!!
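Edit: one thing I'm now second-guessing after reading the posts above (a hunch, not a diagnosis): my e-in/e-out count a point as correct whenever its squared residual is under 0.5, which is not the sign-based classification error used elsewhere in this thread. A tiny R illustration of where the two rules disagree:
Code:
y <- 1; raw <- 0.2     # the raw linear output has the right sign but is far from +1
sign(raw) != y         # FALSE: the sign rule calls this point correct
(raw - y)^2 > 0.5      # TRUE:  the 0.5 threshold rule calls it an error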