Classic Worked Examples of Decision Trees (Machine Learning 5)


Most machine learning algorithms can be classified as supervised learning or unsupervised learning. In supervised learning, every instance in the dataset must carry a value for the target attribute, so before a supervised model can be trained, considerable time and effort must be invested in building a dataset with target values.

Linear regression, neural networks, and decision trees are all members of the supervised family. ofter has already covered linear regression and neural networks in detail in two earlier articles; see those if you are interested. Linear regression and neural networks suit numerical inputs; when the input attributes are mainly nominal, as in the dataset used here, a decision tree model is the better fit.

1. Introduction to Decision Trees

1.1 What a decision tree does

A decision tree builds a classification or regression model as a tree structure: a chain of tests leads down to a leaf, and the leaf gives the answer, either a class (take the job offer? no) or a value (how much will this used computer sell for? 10 yuan).
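
To make this concrete: a fitted tree is nothing more than a chain of nested if/else tests. Below is a toy, hand-written sketch of the job-offer decision; the attribute names and thresholds are invented for illustration, not learned from any data:

def accept_job(offer):
    # each decision node tests a single attribute ...
    if offer['salary'] < 50000:
        return 'reject'
    if offer['commute_minutes'] > 60:
        return 'reject'
    # ... and each leaf carries the final answer
    return 'accept'

print(accept_job({'salary': 60000, 'commute_minutes': 30}))  # -> accept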

1.2 Construction algorithms and their splitting criteria

ID3 (information gain), C4.5 (gain ratio), C5.0 (an improved C4.5), and CART (Gini index).

1.3 How the tree is built

Whichever algorithm and criterion we pick, the idea behind placing tree nodes is the same. Take the dataset below:

Figure 1-1: Loan eligibility dataset

By computing one or more criteria, we decide which attribute (age group / has a job / owns a house / credit rating / loan granted) should sit at which tree node. Attributes whose scores come out too low can be pruned, i.e. left out of the tree entirely. Taking C4.5 as an example, the computed decision tree looks like this:

Figure 1-2: C4.5 decision tree

2. Construction Algorithms

2.1 ID3

Information gain is the drop in class entropy produced by a split, i.e. Ent(before) − Ent(after):

$$\mathrm{Gain}(D,a)=\mathrm{Ent}(D)-\sum_{v=1}^{V}\frac{|D^{v}|}{|D|}\,\mathrm{Ent}(D^{v})$$

The ID3 algorithm uses information gain to select attributes. Let's build a decision tree from the Figure 1-1 dataset with ID3.

Figure 2-1: ID3 decision tree

Let's look at the computation. The first tree node is chosen as shown below:

Figure 2-2: ID3, choosing the first split

Clearly, feature 2 (owns a house) has the best information gain.
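
As a sanity check, here is a short sketch of that computation, assuming dataset.txt from section 4 has been saved in the working directory. It mirrors ID3_chooseBestFeatureToSplit from the full source below and should single out feature 2:

from collections import Counter
from math import log

def entropy(labels):
    # Ent(D) = -sum_k p_k * log2(p_k)
    n = len(labels)
    return -sum((c / n) * log(c / n, 2) for c in Counter(labels).values())

def info_gain(rows, feat):
    # Gain(D, a) = Ent(D) - sum_v |D_v|/|D| * Ent(D_v)
    n = len(rows)
    gain = entropy([r[-1] for r in rows])
    for v in set(r[feat] for r in rows):
        subset = [r[-1] for r in rows if r[feat] == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

rows = [line.strip().split(',') for line in open('dataset.txt')]
for i, name in enumerate(['age group', 'has job', 'owns house', 'credit rating']):
    print(i, name, round(info_gain(rows, i), 3))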

Figure 2-3: Dataset attributes

Information gain has a known flaw, however: it is biased toward attributes that take many distinct values in the dataset. The C4.5 algorithm was introduced to correct this.

2.2 C4.5

The gain ratio divides the information gain by the attribute's split information:

$$\mathrm{GainRatio}(D,a)=\frac{\mathrm{Gain}(D,a)}{\mathrm{IV}(a)},\qquad \mathrm{IV}(a)=-\sum_{v=1}^{V}\frac{|D^{v}|}{|D|}\log_{2}\frac{|D^{v}|}{|D|}$$

The C4.5 algorithm uses the gain ratio to select attributes. Let's build a decision tree from the Figure 1-1 dataset with C4.5.

Figure 2-4: C4.5 decision tree

Let's look at the computation. The first tree node is chosen as shown below:

Figure 2-5: C4.5, choosing the first split

Clearly, feature 2 (owns a house) has the best gain ratio.
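
The only new ingredient relative to ID3 is the split information IV(a) in the denominator, which grows with the number of distinct values and so penalises many-valued attributes. A sketch, reusing entropy() and info_gain() from the ID3 snippet above:

from collections import Counter
from math import log

def gain_ratio(rows, feat):
    # GainRatio(D, a) = Gain(D, a) / IV(a)
    # entropy() and info_gain() are the helpers defined in the ID3 sketch.
    n = len(rows)
    counts = Counter(r[feat] for r in rows)
    # IV(a) = -sum_v (|D_v|/|D|) * log2(|D_v|/|D|)
    iv = -sum((c / n) * log(c / n, 2) for c in counts.values())
    return info_gain(rows, feat) / iv if iv > 0 else 0.0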

2.3 CART

The CART algorithm uses the Gini index to select attributes. Let's build a decision tree from the Figure 1-1 dataset with CART.

Figure 2-6: CART decision tree

Let's look at the computation. The first tree node is chosen as shown below:

Figure 2-7: CART, choosing the first split

One clarification: the Gini index is at most 1 and at least 0, and the closer it is to 0, the purer the node; if a split separates the classes completely, its Gini index is 0. So, unlike the two criteria above, we choose the feature with the lowest Gini index.
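
A matching sketch for the weighted Gini index of a candidate split, on the same data layout as the earlier snippets; here the feature with the lowest value wins:

from collections import Counter

def gini(labels):
    # Gini(D) = 1 - sum_k p_k^2; 0 means the node is pure
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_index(rows, feat):
    # Gini_index(D, a) = sum_v |D_v|/|D| * Gini(D_v)
    n = len(rows)
    total = 0.0
    for v in set(r[feat] for r in rows):
        subset = [r[-1] for r in rows if r[feat] == v]
        total += len(subset) / n * gini(subset)
    return total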

3. Applications of Decision Trees

The real challenge in applying machine learning is finding the algorithm whose inductive bias best matches a particular dataset, so it pays to know the typical scenarios for each model and algorithm. Three examples, with a code sketch for the first one after this list:

Case 1: [classification] a bank vetting applicants' loan eligibility

Case 2: [regression/probability] whether an employee will resign

Case 3: [regression/value] predicting the price of a second-hand item
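
In practice you would rarely hand-roll these criteria. A minimal scikit-learn sketch of case 1, assuming scikit-learn is installed and dataset.txt from section 4 is present:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

data = np.loadtxt('dataset.txt', delimiter=',', dtype=int)
X, y = data[:, :-1], data[:, -1]  # four nominal features, 0/1 loan decision

clf = DecisionTreeClassifier(criterion='entropy')  # criterion='gini' would mimic CART
clf.fit(X, y)
print(export_text(clf, feature_names=['age group', 'has job', 'owns house', 'credit rating']))
print(clf.predict([[1, 0, 1, 2]]))  # middle-aged, no job, owns a house, excellent credit

Note that scikit-learn treats the integer codes as ordered values and splits on thresholds, so the learned tree can differ in shape from the ID3/C4.5 trees above even when its predictions agree.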

4. Source Code

The complete source code used in this example:

tree.py

from math import log
from collections import Counter
import operator

import treePlotter

pre_pruning = True
post_pruning = True


def read_dataset(filename):
    """
    Age group:     0 = young, 1 = middle-aged, 2 = senior
    Has job:       0 = no, 1 = yes
    Owns house:    0 = no, 1 = yes
    Credit rating: 0 = fair, 1 = good, 2 = excellent
    Class (loan granted?): 0 = no, 1 = yes
    """
    fr = open(filename, 'r')
    all_lines = fr.readlines()  # one str per line
    labels = ['age group', 'has job', 'owns house', 'credit rating']
    dataset = []
    for line in all_lines:
        dataset.append(line.strip().split(','))  # comma-separated values
    return dataset, labels


def read_testset(testfile):
    """Same column encoding as read_dataset."""
    fr = open(testfile, 'r')
    all_lines = fr.readlines()
    testset = []
    for line in all_lines:
        testset.append(line.strip().split(','))
    return testset


def cal_entropy(dataset):
    """Shannon entropy Ent(D) of the class column, in bits."""
    numEntries = len(dataset)
    labelCounts = {}
    # count the occurrences of each class label
    for featVec in dataset:
        currentlabel = featVec[-1]
        if currentlabel not in labelCounts.keys():
            labelCounts[currentlabel] = 0
        labelCounts[currentlabel] += 1
    Ent = 0.0
    for key in labelCounts:
        p = float(labelCounts[key]) / numEntries
        Ent = Ent - p * log(p, 2)  # log base 2
    return Ent


def splitdataset(dataset, axis, value):
    """Rows whose feature `axis` equals `value`, with that column removed."""
    retdataset = []
    for featVec in dataset:
        if featVec[axis] == value:
            reducedfeatVec = featVec[:axis]  # drop the axis-th feature
            reducedfeatVec.extend(featVec[axis + 1:])
            retdataset.append(reducedfeatVec)
    return retdataset


'''
Choosing the best split:
  ID3  - information gain
  C4.5 - gain ratio
  CART - Gini index
'''


def ID3_chooseBestFeatureToSplit(dataset):
    numFeatures = len(dataset[0]) - 1
    baseEnt = cal_entropy(dataset)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):  # iterate over all features
        featList = [example[i] for example in dataset]
        uniqueVals = set(featList)  # the distinct values of feature i
        newEnt = 0.0
        for value in uniqueVals:  # conditional entropy after splitting on i
            subdataset = splitdataset(dataset, i, value)
            p = len(subdataset) / float(len(dataset))
            newEnt += p * cal_entropy(subdataset)
        infoGain = baseEnt - newEnt
        print("ID3: information gain of feature %d is %.3f" % (i, infoGain))
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature


def C45_chooseBestFeatureToSplit(dataset):
    numFeatures = len(dataset[0]) - 1
    baseEnt = cal_entropy(dataset)
    bestInfoGain_ratio = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataset]
        uniqueVals = set(featList)
        newEnt = 0.0
        IV = 0.0
        for value in uniqueVals:
            subdataset = splitdataset(dataset, i, value)
            p = len(subdataset) / float(len(dataset))
            newEnt += p * cal_entropy(subdataset)
            IV = IV - p * log(p, 2)  # split information IV(a)
        infoGain = baseEnt - newEnt
        if IV == 0:  # guard against division by zero
            continue
        infoGain_ratio = infoGain / IV
        print("C4.5: gain ratio of feature %d is %.3f" % (i, infoGain_ratio))
        if infoGain_ratio > bestInfoGain_ratio:  # keep the largest gain ratio
            bestInfoGain_ratio = infoGain_ratio
            bestFeature = i
    return bestFeature


def CART_chooseBestFeatureToSplit(dataset):
    numFeatures = len(dataset[0]) - 1
    bestGini = 999999.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataset]
        uniqueVals = set(featList)
        gini = 0.0
        for value in uniqueVals:
            subdataset = splitdataset(dataset, i, value)
            p = len(subdataset) / float(len(dataset))
            # fraction of class '0' inside this branch (binary classes '0'/'1')
            subp = len(splitdataset(subdataset, -1, '0')) / float(len(subdataset))
            # weighted Gini index: accumulate over every value of the feature
            gini += p * (1.0 - pow(subp, 2) - pow(1 - subp, 2))
        print("CART: Gini index of feature %d is %.3f" % (i, gini))
        if gini < bestGini:
            bestGini = gini
            bestFeature = i
    return bestFeature


def majorityCnt(classList):
    """
    When every feature has been used but the class labels are still mixed,
    label the leaf with the majority class.
    """
    classCont = {}
    for vote in classList:
        if vote not in classCont.keys():
            classCont[vote] = 0
        classCont[vote] += 1
    sortedClassCont = sorted(classCont.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCont[0][0]


def ID3_createTree(dataset, labels, test_dataset):
    """Build a decision tree with ID3, with optional pre-/post-pruning."""
    classList = [example[-1] for example in dataset]
    if classList.count(classList[0]) == len(classList):
        # all samples share one class: stop splitting
        return classList[0]
    if len(dataset[0]) == 1:
        # all features used up: return the majority class
        return majorityCnt(classList)
    bestFeat = ID3_chooseBestFeatureToSplit(dataset)
    bestFeatLabel = labels[bestFeat]
    print("Best feature at this node: " + bestFeatLabel)
    ID3Tree = {bestFeatLabel: {}}
    curLabels = labels[:]  # label list matching this node's columns (needed for pruning)
    del labels[bestFeat]
    # every value the chosen feature takes in this node's data
    featValues = [example[bestFeat] for example in dataset]
    uniqueVals = set(featValues)

    if pre_pruning:
        ans = [vec[-1] for vec in test_dataset]
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]  # majority class of this node
        root_acc = cal_acc(test_output=[leaf_output] * len(test_dataset), label=ans)
        outputs = []
        ans = []
        for value in uniqueVals:
            cut_testset = splitdataset(test_dataset, bestFeat, value)
            cut_dataset = splitdataset(dataset, bestFeat, value)
            for vec in cut_testset:
                ans.append(vec[-1])
            result_counter = Counter()
            for vec in cut_dataset:
                result_counter[vec[-1]] += 1
            branch_output = result_counter.most_common(1)[0][0]  # majority class per branch
            outputs += [branch_output] * len(cut_testset)
        cut_acc = cal_acc(test_output=outputs, label=ans)
        if cut_acc <= root_acc:
            return leaf_output  # splitting does not improve test accuracy: stop here

    for value in uniqueVals:
        subLabels = labels[:]
        ID3Tree[bestFeatLabel][value] = ID3_createTree(
            splitdataset(dataset, bestFeat, value),
            subLabels,
            splitdataset(test_dataset, bestFeat, value))

    if post_pruning:
        tree_output = classifytest(ID3Tree, featLabels=curLabels, testDataSet=test_dataset)
        ans = [vec[-1] for vec in test_dataset]
        root_acc = cal_acc(tree_output, ans)
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        cut_acc = cal_acc([leaf_output] * len(test_dataset), ans)
        if cut_acc >= root_acc:
            return leaf_output  # collapsing the subtree is at least as accurate

    return ID3Tree


def C45_createTree(dataset, labels, test_dataset):
    """Build a decision tree with C4.5, with optional pre-/post-pruning."""
    classList = [example[-1] for example in dataset]
    if classList.count(classList[0]) == len(classList):
        # all samples share one class: stop splitting
        return classList[0]
    if len(dataset[0]) == 1:
        # all features used up: return the majority class
        return majorityCnt(classList)
    bestFeat = C45_chooseBestFeatureToSplit(dataset)
    bestFeatLabel = labels[bestFeat]
    print("Best feature at this node: " + bestFeatLabel)
    C45Tree = {bestFeatLabel: {}}
    curLabels = labels[:]  # label list matching this node's columns (needed for pruning)
    del labels[bestFeat]
    featValues = [example[bestFeat] for example in dataset]
    uniqueVals = set(featValues)

    if pre_pruning:
        ans = [vec[-1] for vec in test_dataset]
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]  # majority class of this node
        root_acc = cal_acc(test_output=[leaf_output] * len(test_dataset), label=ans)
        outputs = []
        ans = []
        for value in uniqueVals:
            cut_testset = splitdataset(test_dataset, bestFeat, value)
            cut_dataset = splitdataset(dataset, bestFeat, value)
            for vec in cut_testset:
                ans.append(vec[-1])
            result_counter = Counter()
            for vec in cut_dataset:
                result_counter[vec[-1]] += 1
            branch_output = result_counter.most_common(1)[0][0]  # majority class per branch
            outputs += [branch_output] * len(cut_testset)
        cut_acc = cal_acc(test_output=outputs, label=ans)
        if cut_acc <= root_acc:
            return leaf_output  # splitting does not improve test accuracy: stop here

    for value in uniqueVals:
        subLabels = labels[:]
        C45Tree[bestFeatLabel][value] = C45_createTree(
            splitdataset(dataset, bestFeat, value),
            subLabels,
            splitdataset(test_dataset, bestFeat, value))

    if post_pruning:
        tree_output = classifytest(C45Tree, featLabels=curLabels, testDataSet=test_dataset)
        ans = [vec[-1] for vec in test_dataset]
        root_acc = cal_acc(tree_output, ans)
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        cut_acc = cal_acc([leaf_output] * len(test_dataset), ans)
        if cut_acc >= root_acc:
            return leaf_output  # collapsing the subtree is at least as accurate

    return C45Tree


def CART_createTree(dataset, labels, test_dataset):
    """Build a decision tree with CART, with optional pre-/post-pruning."""
    classList = [example[-1] for example in dataset]
    if classList.count(classList[0]) == len(classList):
        # all samples share one class: stop splitting
        return classList[0]
    if len(dataset[0]) == 1:
        # all features used up: return the majority class
        return majorityCnt(classList)
    bestFeat = CART_chooseBestFeatureToSplit(dataset)
    bestFeatLabel = labels[bestFeat]
    print("Best feature at this node: " + bestFeatLabel)
    CARTTree = {bestFeatLabel: {}}
    curLabels = labels[:]  # label list matching this node's columns (needed for pruning)
    del labels[bestFeat]
    featValues = [example[bestFeat] for example in dataset]
    uniqueVals = set(featValues)

    if pre_pruning:
        ans = [vec[-1] for vec in test_dataset]
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]  # majority class of this node
        root_acc = cal_acc(test_output=[leaf_output] * len(test_dataset), label=ans)
        outputs = []
        ans = []
        for value in uniqueVals:
            cut_testset = splitdataset(test_dataset, bestFeat, value)
            cut_dataset = splitdataset(dataset, bestFeat, value)
            for vec in cut_testset:
                ans.append(vec[-1])
            result_counter = Counter()
            for vec in cut_dataset:
                result_counter[vec[-1]] += 1
            branch_output = result_counter.most_common(1)[0][0]  # majority class per branch
            outputs += [branch_output] * len(cut_testset)
        cut_acc = cal_acc(test_output=outputs, label=ans)
        if cut_acc <= root_acc:
            return leaf_output  # splitting does not improve test accuracy: stop here

    for value in uniqueVals:
        subLabels = labels[:]
        CARTTree[bestFeatLabel][value] = CART_createTree(
            splitdataset(dataset, bestFeat, value),
            subLabels,
            splitdataset(test_dataset, bestFeat, value))

    if post_pruning:  # evaluated once per node, after all branches are built
        tree_output = classifytest(CARTTree, featLabels=curLabels, testDataSet=test_dataset)
        ans = [vec[-1] for vec in test_dataset]
        root_acc = cal_acc(tree_output, ans)
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        cut_acc = cal_acc([leaf_output] * len(test_dataset), ans)
        if cut_acc >= root_acc:
            return leaf_output  # collapsing the subtree is at least as accurate

    return CARTTree


def classify(inputTree, featLabels, testVec):
    """
    Input:  a decision tree, the feature labels, one test vector
    Output: the predicted class
    """
    firstStr = list(inputTree.keys())[0]
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    classLabel = '0'
    for key in secondDict.keys():
        if testVec[featIndex] == key:
            if type(secondDict[key]).__name__ == 'dict':
                classLabel = classify(secondDict[key], featLabels, testVec)
            else:
                classLabel = secondDict[key]
    return classLabel


def classifytest(inputTree, featLabels, testDataSet):
    """Run the tree on every vector in a test set."""
    classLabelAll = []
    for testVec in testDataSet:
        classLabelAll.append(classify(inputTree, featLabels, testVec))
    return classLabelAll


def cal_acc(test_output, label):
    """Fraction of predictions in test_output that match label."""
    assert len(test_output) == len(label)
    if len(test_output) == 0:  # guard: an empty split counts as accuracy 0
        return 0.0
    count = 0
    for index in range(len(test_output)):
        if test_output[index] == label[index]:
            count += 1
    return float(count / len(test_output))


if __name__ == '__main__':
    filename = 'dataset.txt'
    testfile = 'testset.txt'
    dataset, labels = read_dataset(filename)
    print('dataset', dataset)
    print("---------------------------------------------")
    print("number of samples:", len(dataset))
    print("Ent(D):", cal_entropy(dataset))
    print("---------------------------------------------")
    print("Searching for the best feature of the first split:\n")
    print("ID3's best feature index: " + str(ID3_chooseBestFeatureToSplit(dataset)))
    print("--------------------------------------------------")
    print("C4.5's best feature index: " + str(C45_chooseBestFeatureToSplit(dataset)))
    print("--------------------------------------------------")
    print("CART's best feature index: " + str(CART_chooseBestFeatureToSplit(dataset)))
    print("First split search finished!")
    print("---------------------------------------------")
    print("Now building the corresponding decision tree -------")

    dec_tree = '3'  # '1' = ID3, '2' = C4.5, '3' = CART
    # ID3 decision tree
    if dec_tree == '1':
        labels_tmp = labels[:]  # copy: createTree mutates labels
        ID3desicionTree = ID3_createTree(dataset, labels_tmp, test_dataset=read_testset(testfile))
        print('ID3desicionTree:\n', ID3desicionTree)
        treePlotter.ID3_Tree(ID3desicionTree)
        testSet = read_testset(testfile)
        print("Results on the test set:")
        print('ID3_TestSet_classifyResult:\n', classifytest(ID3desicionTree, labels, testSet))
        print("---------------------------------------------")
    # C4.5 decision tree
    if dec_tree == '2':
        labels_tmp = labels[:]  # copy: createTree mutates labels
        C45desicionTree = C45_createTree(dataset, labels_tmp, test_dataset=read_testset(testfile))
        print('C45desicionTree:\n', C45desicionTree)
        treePlotter.C45_Tree(C45desicionTree)
        testSet = read_testset(testfile)
        print("Results on the test set:")
        print('C4.5_TestSet_classifyResult:\n', classifytest(C45desicionTree, labels, testSet))
        print("---------------------------------------------")
    # CART decision tree
    if dec_tree == '3':
        labels_tmp = labels[:]  # copy: createTree mutates labels
        CARTdesicionTree = CART_createTree(dataset, labels_tmp, test_dataset=read_testset(testfile))
        print('CARTdesicionTree:\n', CARTdesicionTree)
        treePlotter.CART_Tree(CARTdesicionTree)
        testSet = read_testset(testfile)
        print("Results on the test set:")
        print('CART_TestSet_classifyResult:\n', classifytest(CARTdesicionTree, labels, testSet))

The dataset in dataset.txt:

0,0,0,0,0
0,0,0,1,0
0,1,0,1,1
0,1,1,0,1
0,0,0,0,0
1,0,0,0,0
1,0,0,1,0
1,1,1,1,1
1,0,1,2,1
1,0,1,2,1
2,0,1,2,1
2,0,1,1,1
2,1,0,1,1
2,1,0,2,1
2,0,0,0,0
2,0,0,2,0

treePlotter.py

import matplotlib.pyplot as plt
from pylab import mpl

mpl.rcParams['font.sans-serif'] = ['SimHei']  # a font that can also render CJK labels

decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")


def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    """Draw one node box with an arrow from its parent."""
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType, arrowprops=arrow_args)


def getNumLeafs(myTree):
    """Count the leaves of a nested-dict tree."""
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs


def getTreeDepth(myTree):
    """Depth of a nested-dict tree."""
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            thisDepth = getTreeDepth(secondDict[key]) + 1
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth


def plotMidText(cntrPt, parentPt, txtString):
    """Label the edge between a parent and a child."""
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString)


def plotTree(myTree, parentPt, nodeTxt):
    numLeafs = getNumLeafs(myTree)
    depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalw, plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            plotTree(secondDict[key], cntrPt, str(key))
        else:
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalw
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD


def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    # plt.show()


# ID3 decision tree
def ID3_Tree(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.title("ID3 decision tree", fontsize=12, color='red')
    plt.show()


# C4.5 decision tree
def C45_Tree(inTree):
    fig = plt.figure(2, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.title("C4.5 decision tree", fontsize=12, color='red')
    plt.show()


# CART decision tree
def CART_Tree(inTree):
    fig = plt.figure(3, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.title("CART decision tree", fontsize=12, color='red')
    plt.show()
