Andrew Ng's Machine Learning Exercise: SVM (Support Vector Machines)

This post walks through one of the exercises from Andrew Ng's Machine Learning course: support vector machines. Working through it should take your understanding of machine learning one step deeper.


1 Support Vector Machines

1.1 Example Dataset 1

%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
from scipy.io import loadmat
from sklearn import svm


Most SVM libraries automatically add the extra intercept feature x₀ = 1 and the corresponding θ₀ for you, so there is no need to add them manually.

mat = loadmat('./data/ex6data1.mat')
print(mat.keys())
# dict_keys(['__header__', '__version__', '__globals__', 'X', 'y'])
X = mat['X']
y = mat['y']


def plotData(X, y):
    plt.figure(figsize=(8, 5))
    plt.scatter(X[:, 0], X[:, 1], c=y.flatten(), cmap='rainbow')
    plt.xlabel('X1')
    plt.ylabel('X2')
    plt.legend()

plotData(X, y)

def plotBoundary(clf, X):
    '''Plot the decision boundary.'''
    x_min, x_max = X[:, 0].min() * 1.2, X[:, 0].max() * 1.1
    y_min, y_max = X[:, 1].min() * 1.1, X[:, 1].max() * 1.1
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, 500),
                         np.linspace(y_min, y_max, 500))
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    plt.contour(xx, yy, Z)


models = [svm.SVC(C, kernel='linear') for C in [1, 100]]
clfs = [model.fit(X, y.ravel()) for model in models]


titles = ['SVM Decision Boundary with C = {} (Example Dataset 1)'.format(C) for C in [1, 100]]
for model, title in zip(clfs, titles):
    plt.figure(figsize=(8, 5))
    plotData(X, y)
    plotBoundary(model, X)
    plt.title(title)

As the plots show, when C is small the penalty on misclassification is small, so the model is lenient: it tolerates some misclassified points and keeps a wider margin.

When C is large the penalty on misclassification is large, so the model is strict: fewer points are misclassified, but the margin is narrower.
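The effect of C can also be checked numerically: for a linear SVM the geometric margin width is 2/‖w‖, and it is non-increasing as C grows. A minimal sketch on synthetic data with one outlier (this is made-up data, not the exercise's ex6data1.mat):

```python
import numpy as np
from sklearn import svm

# Two synthetic 2-D clusters plus one outlier placed inside the wrong cluster
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2, 2], rng.randn(20, 2) - [2, 2]])
y = np.array([1] * 20 + [-1] * 20)
X[0] = [-1.5, -1.5]  # move one positive point deep into the negative cluster

margins = {}
for C in [0.01, 100]:
    clf = svm.SVC(C=C, kernel='linear').fit(X, y)
    w = clf.coef_[0]
    margins[C] = 2 / np.linalg.norm(w)  # geometric margin width = 2 / ||w||
    print('C = {:>6}: margin width = {:.3f}'.format(C, margins[C]))
```

With the small C the outlier is tolerated and the margin stays wide; with the large C the fit tightens and the margin shrinks.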


1.2 SVM with Gaussian Kernels

In this part we use an SVM for non-linear classification, with a Gaussian kernel.

To find a non-linear decision boundary with an SVM, we first implement the Gaussian kernel. You can think of it as a similarity function that measures the "distance" between a pair of examples (x⁽ⁱ⁾, x⁽ʲ⁾): K(x⁽ⁱ⁾, x⁽ʲ⁾) = exp(−‖x⁽ⁱ⁾ − x⁽ʲ⁾‖² / (2σ²)).

When training the classifier itself, we can simply use the RBF kernel built into sklearn's svm.


1.2.1 Gaussian Kernel

def gaussKernel(x1, x2, sigma):
    return np.exp(-((x1 - x2) ** 2).sum() / (2 * sigma ** 2))

gaussKernel(np.array([1, 2, 1]), np.array([0, 4, -1]), 2.)  # 0.32465246735834974
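As a sanity check, this gaussKernel should agree with sklearn's own RBF kernel under the σ → γ conversion used below (γ = 1/(2σ²) = σ⁻²/2). A small sketch:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def gaussKernel(x1, x2, sigma):
    return np.exp(-((x1 - x2) ** 2).sum() / (2 * sigma ** 2))

x1 = np.array([1, 2, 1])
x2 = np.array([0, 4, -1])
sigma = 2.
gamma = np.power(sigma, -2.) / 2   # sklearn's gamma parameterisation

ours = gaussKernel(x1, x2, sigma)
theirs = rbf_kernel(x1.reshape(1, -1), x2.reshape(1, -1), gamma=gamma)[0, 0]
print(ours, theirs)  # both 0.32465246735834974
```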


1.2.2 Example Dataset 2

mat = loadmat('./data/ex6data2.mat')
X2 = mat['X']
y2 = mat['y']
plotData(X2, y2)

sigma = 0.1
gamma = np.power(sigma, -2.) / 2
clf = svm.SVC(C=1, kernel='rbf', gamma=gamma)
model = clf.fit(X2, y2.flatten())
plotData(X2, y2)
plotBoundary(model, X2)


1.2.3 Example Dataset 3

mat3 = loadmat('data/ex6data3.mat')
X3, y3 = mat3['X'], mat3['y']
Xval, yval = mat3['Xval'], mat3['yval']
plotData(X3, y3)

Cvalues = (0.01, 0.03, 0.1, 0.3, 1., 3., 10., 30.)
sigmavalues = Cvalues
best_pair, best_score = (0, 0), 0

for C in Cvalues:
    for sigma in sigmavalues:
        gamma = np.power(sigma, -2.) / 2
        model = svm.SVC(C=C, kernel='rbf', gamma=gamma)
        model.fit(X3, y3.flatten())
        this_score = model.score(Xval, yval)
        if this_score > best_score:
            best_score = this_score
            best_pair = (C, sigma)

print('best_pair={}, best_score={}'.format(best_pair, best_score))
# best_pair=(1.0, 0.1), best_score=0.965
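The hand-rolled double loop above can also be written with sklearn's GridSearchCV. Note the difference: GridSearchCV cross-validates within the training set, while the exercise scores on a fixed validation set (Xval, yval), so the winning parameters can differ slightly. A sketch on synthetic stand-in data (the real exercise would pass X3, y3):

```python
import numpy as np
from sklearn import svm
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the exercise data (ex6data3.mat is not loaded here)
rng = np.random.RandomState(0)
Xs = rng.randn(100, 2)
ys = (Xs[:, 0] ** 2 + Xs[:, 1] ** 2 > 1).astype(int)  # circular boundary

Cvalues = (0.01, 0.03, 0.1, 0.3, 1., 3., 10., 30.)
param_grid = {'C': Cvalues,
              'gamma': [np.power(s, -2.) / 2 for s in Cvalues]}

# 5-fold cross-validation over the full C x gamma grid
grid = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=5)
grid.fit(Xs, ys)
print(grid.best_params_, round(grid.best_score_, 3))
```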


model = svm.SVC(C=1., kernel='rbf', gamma=np.power(.1, -2.) / 2)
model.fit(X3, y3.flatten())
plotData(X3, y3)
plotBoundary(model, X3)

# A plotting exercise of my own, unrelated to the assignment; just a plotting reference.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm

# Create three linearly separable points
X = np.array([[3, 3], [4, 3], [1, 1]])
Y = np.array([1, 1, -1])

# Fit the model
clf = svm.SVC(kernel='linear')
clf.fit(X, Y)

# Get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf.intercept_[0]) / w[1]

# Plot the parallels to the separating hyperplane that pass through the
# support vectors
b = clf.support_vectors_[0]
yy_down = a * xx + (b[1] - a * b[0])
b = clf.support_vectors_[-1]
yy_up = a * xx + (b[1] - a * b[0])

# Plot the line, the points, and the nearest vectors to the plane
plt.figure(figsize=(8, 5))
plt.plot(xx, yy, 'k-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')

# Circle the support vectors
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
            s=150, facecolors='none', edgecolors='k', linewidths=1.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.rainbow)
plt.axis('tight')
plt.show()
print(clf.decision_function(X))

[ 1.   1.5 -1. ]


2 Spam Classification

2.1 Preprocessing Emails

In this part we build a spam classifier with an SVM. Each email must be converted into an n-dimensional feature vector x; the classifier then predicts whether a given email is spam (y = 1) or not spam (y = 0).

Take a look at an example from the dataset:


with open('data/emailSample1.txt', 'r') as f:
    email = f.read()
print(email)


> Anyone knows how much it costs to host a web portal ?
>
Well, it depends on how many visitors you're expecting.
This can be anywhere from less than 10 bucks a month to a couple of $100.
You should checkout http://www.rackspace.com/ or perhaps Amazon EC2
if youre running something big..
To unsubscribe yourself from this mailing list, send an email to:
groupname-unsubscribe@egroups.com


As you can see, the email contains a URL, an email address (at the end), numbers, and dollar amounts. Many emails contain these elements, but the specifics differ from message to message. A common preprocessing strategy is therefore to normalize these values: treat every URL as the same token, every number as the same token, and so on.

For example, we replace every URL with the single string 'httpaddr', which records that the email contains a URL without caring which URL it is. This usually improves a spam classifier's performance, because spammers often randomize URLs, so the chance of seeing any particular URL again in new spam is very small.

We apply the following preprocessing steps:
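The URL normalization described above is a single regex substitution. A minimal sketch on one line from the sample email (the pattern is one reasonable choice, matching everything after `//` up to the next whitespace):

```python
import re

line = "You should checkout http://www.rackspace.com/ or perhaps Amazon EC2"
normalized = re.sub(r'(http|https)://[^\s]*', 'httpaddr', line)
print(normalized)  # You should checkout httpaddr or perhaps Amazon EC2
```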

1. Lower-casing: convert the entire email to lower case.
2. Stripping HTML: remove all HTML tags, keeping only the content.
3. Normalizing URLs: replace every URL with the string "httpaddr".
4. Normalizing Email Addresses: replace every address with "emailaddr".
5. Normalizing Dollars: replace every dollar sign ($) with "dollar".
6. Normalizing Numbers: replace every number with "number".
7. Word Stemming: reduce every word to its stem. For example, "discount", "discounts", "discounted" and "discounting" all become "discount".
8. Removal of non-words: remove non-word characters, and collapse all whitespace (tabs, newlines, spaces) to a single space.


%matplotlib inline
import numpy as np
import pandas as pd  # needed later for reading vocab.txt
import matplotlib.pyplot as plt
from scipy.io import loadmat
from sklearn import svm
import re  # regular expressions for e-mail processing
# A usable English stemming algorithm (Porter stemmer)
from stemming.porter2 import stem
# This stemmer seems closer to the one used in the assignment; results are similar
import nltk, nltk.stem.porter


def processEmail(email):
    """Apply every preprocessing step except word stemming and removal of non-words."""
    email = email.lower()
    # Match '<', then any run of characters that are neither '<' nor '>', then '>',
    # i.e. strip <...> HTML tags
    email = re.sub(r'<[^<>]+>', ' ', email)
    # Match everything after '//' up to the next whitespace character
    email = re.sub(r'(http|https)://[^\s]*', 'httpaddr', email)
    email = re.sub(r'[^\s]+@[^\s]+', 'emailaddr', email)
    email = re.sub(r'[$]+', 'dollar', email)
    email = re.sub(r'[\d]+', 'number', email)
    return email
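The backslash escapes matter here: without `\s` and `\d` the character classes would match the literal letters s and d. A quick sanity check of the same substitutions, in the same order, on one made-up sample line:

```python
import re

line = 'Buy now for $100 at http://deals.example.com or email sales@example.com'
line = line.lower()
line = re.sub(r'(http|https)://[^\s]*', 'httpaddr', line)
line = re.sub(r'[^\s]+@[^\s]+', 'emailaddr', line)
line = re.sub(r'[$]+', 'dollar', line)
line = re.sub(r'[\d]+', 'number', line)
print(line)  # buy now for dollarnumber at httpaddr or email emailaddr
```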


Next comes stemming and removal of non-word characters.

def email2TokenList(email):
    """Preprocess the email and return a clean list of word stems."""
    # I'll use the NLTK stemmer because it more accurately duplicates the
    # performance of the OCTAVE implementation in the assignment
    stemmer = nltk.stem.porter.PorterStemmer()
    email = processEmail(email)
    # Split the email into individual tokens; re.split() accepts many delimiters at once
    tokens = re.split(r'[ \@\$\/\#\.\-\:\&\*\+\=\[\]\?\!\(\)\{\}\,\'\"\>\_\<\;\%]', email)
    tokenlist = []
    for token in tokens:
        # Remove any non-alphanumeric characters
        token = re.sub('[^a-zA-Z0-9]', '', token)
        # Use the Porter stemmer to extract the word stem
        stemmed = stemmer.stem(token)
        # Skip empty strings that contain no characters
        if not len(token): continue
        tokenlist.append(stemmed)
    return tokenlist


2.1.1 Vocabulary List(词汇表)

After preprocessing, we have a list of processed words for each email. The next step is to choose which words to use in the classifier and which to discard.

We are given a vocabulary list, vocab.txt, containing 1899 words that occur frequently in practice.

We check which words in the processed email appear in vocab.txt and return their indices in vocab.txt: these are the word indices we want to train on.

def email2VocabIndices(email, vocab):
    """Return the indices of the vocabulary words that occur in the email."""
    token = email2TokenList(email)
    index = [i for i in range(len(vocab)) if vocab[i] in token]
    return index
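The list comprehension above scans the whole token list once per vocabulary word; with 1899 words that is fine, but mapping each word to its index in a dictionary is the more scalable pattern. A sketch with a toy vocabulary (a made-up stand-in, not the exercise's vocab.txt):

```python
# Build the word -> index mapping once, then each token lookup is O(1)
vocab = ['anyon', 'know', 'how', 'much', 'spam']   # toy stand-in for vocab.txt
word2idx = {w: i for i, w in enumerate(vocab)}

tokens = ['anyon', 'know', 'how', 'much', 'it', 'cost']
indices = sorted({word2idx[t] for t in tokens if t in word2idx})
print(indices)  # [0, 1, 2, 3]
```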


2.2 Extracting Features from Emails

def email2FeatureVector(email):
    """
    Convert the email into a feature vector of length n = len(vocab):
    positions whose word occurs in the email are set to 1, the rest to 0.
    """
    df = pd.read_table('data/vocab.txt', names=['words'])
    vocab = df.values  # as an array (as_matrix() was removed in newer pandas)
    vector = np.zeros(len(vocab))  # initialise the vector
    vocab_indices = email2VocabIndices(email, vocab)  # indices of the words present
    # Set the positions of present words to 1
    for i in vocab_indices:
        vector[i] = 1
    return vector


vector = email2FeatureVector(email)
print('length of vector = {}\nnum of non-zero = {}'.format(len(vector), int(vector.sum())))
# length of vector = 1899
# num of non-zero = 45


2.3 Training SVM for Spam Classification

Load the pre-extracted feature vectors and their labels, split into a training set and a test set.

# Training set
mat1 = loadmat('data/spamTrain.mat')
X, y = mat1['X'], mat1['y']
# Test set
mat2 = loadmat('data/spamTest.mat')
Xtest, ytest = mat2['Xtest'], mat2['ytest']


clf = svm.SVC(C=0.1, kernel='linear')
clf.fit(X, y.flatten())


2.4 Top Predictors for Spam

predTrain = clf.score(X, y)
predTest = clf.score(Xtest, ytest)
predTrain, predTest
# (0.99825, 0.989)
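The section title promises the top spam predictors: for a linear kernel these are the largest entries of `clf.coef_`, mapped back through the vocabulary. A sketch with a toy model and toy vocabulary (the real exercise would use the fitted spam classifier and vocab.txt; the data and words below are made up):

```python
import numpy as np
from sklearn import svm

# Toy data: the third feature perfectly predicts the label
Xt = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=float)
yt = np.array([1, 1, 0, 0])
vocab = ['our', 'click', 'remov']   # toy stand-in for vocab.txt

clf = svm.SVC(C=1., kernel='linear').fit(Xt, yt)
weights = clf.coef_[0]
top = np.argsort(weights)[::-1]     # feature indices, largest weight first
print([vocab[i] for i in top])      # most spam-indicative words first
```

On the real spam model the same pattern would print words like the assignment's expected top predictors.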


That wraps up this machine learning practice article on SVMs (support vector machines). I hope it helps; thanks for reading!
