A First Taste of Decision Trees: ID3

I've been studying Machine Learning lately and found it great fun. I'm planning to use these algorithms to crunch genomic data and to drive robot control.

Decision trees are a rather interesting thing. Their guiding principle is Occam's razor: prefer the simplest hypothesis that fits the data. So, naturally, I wrote a decision tree in Java to play with.

Enough chatter; here's the fun code:

ID3(String[][] data,boolean[] value) throws IOException//build the tree from string records plus a boolean label array; log holds debug output
    {
        int i;
        log=new PrintWriter(new FileWriter(new File("fout.txt")),true);
        Set<obje> dat=new HashSet<obje>();//Set is wonderfully handy!
        for(i=0;i<data.length;i++)
            dat.add(new obje(data[i],value[i]));
        root=new node(dat);
    }
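The `obje` class itself never appears in the post. Judging from how the constructor and `node` use it, it must look roughly like this; note this is my sketch, with field names taken from the calls above, and `numv`'s exact meaning is an assumption:

```java
// Sketch of the obje class the code above assumes (not shown in the post):
// one training record plus its boolean conclusion.
class obje {
    String[] field;    // the attribute values of this example
    boolean res;       // the conclusion (go on the date or not)
    boolean[] seleted; // seleted[i] marks attribute i as already split on
    int numv;          // number of attributes (assumed: one per column)

    obje(String[] data, boolean value) {
        field = data;
        res = value;
        numv = data.length;
        seleted = new boolean[numv];
    }
}
```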

ID3 is the most primitive decision-tree algorithm, and it is today's subject.

When writing the tree, I handled the recursive construction like this:

  1.         node(Set<obje> data)
  2.         {
  3.             int i=0,tem=0,n;
  4.             double max=0;
  5.             obje top=data.iterator().next();
  6.             n=top.numv;
  7.             no=++num;//node id, handy for debugging
  8.             if(sam(data))//if every example has the same label, happily emit the decision
  9.             {
  10.                 res=data.iterator().next().res;
  11.                 return;
  12.             }
  13.  
  14.             double gas;
  15.             for(i=0;i<n;i++)
  16.                 if(!top.seleted[i])//skip attributes that were already used
  17.                 {
  18.                     gas=Gain(data,i);//pick the attribute with the best information gain
  19.                     if(max<gas)
  20.                     {
  21.                         max=gas;
  22.                         tem=i;
  23.                     }
  24.                 }
  25.             item=tem;
  26.             if(max<1e-4)//if every gain is tiny there is nothing left to split on, likely noisy or erroneous data; this is a limitation of basic ID3, and an improved version would return the conclusion as a probability here
  27.                 return;
  28.             HashMap<String,HashSet<obje> > lf=new HashMap<String,HashSet<obje>>();//a map of sets used to build the tree; decision trees can fan out badly, so hashing up front keeps lookups fast and safe
  29.             for(obje ij:data)//for-each example
  30.                 if(lf.get(ij.field[item])==null)//no subset for this value yet, so create one
  31.                 {
  32.                     HashSet<obje> bra=new HashSet<obje>();
  33.                     ij.seleted[item]=true;
  34.                     bra.add(ij);
  35.                     lf.put(ij.field[item],bra);
  36.                 }
  37.                 else//otherwise drop it into the existing subset
  38.                 {
  39.                     ij.seleted[item]=true;
  40.                     lf.get(ij.field[item]).add(ij);
  41.                 }
  42.             
  43.             Set<Map.Entry<String,HashSet<obje>>> lfs=lf.entrySet();//an odd way to build a tree: one map handles it all, so even tens of thousands of branches are fine; a child-sibling representation would be less efficient
  44.  
  45.             for(Map.Entry<String,HashSet<obje>> entry:lfs)
  46.                 leaf.put( entry.getKey(),new node(entry.getValue()) );    
  47.         }
  48.     }
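One thing `node` doesn't show is how the finished tree answers a query. Under the structure above (`item` the split attribute, `leaf` the map from attribute value to child, `res` the conclusion at leaves), a lookup would walk the tree like this; this is my sketch, not code from the post:

```java
import java.util.HashMap;

// Sketch (not in the post): querying a tree with the same shape as node.
class TreeNode {
    boolean res;                   // conclusion stored at leaves
    int item;                      // attribute this node splits on
    HashMap<String, TreeNode> leaf = new HashMap<String, TreeNode>();

    boolean classify(String[] example) {
        TreeNode child = leaf.get(example[item]);
        if (child == null)         // a leaf, or an attribute value never seen in training
            return res;
        return child.classify(example);
    }
}
```

For example, a root splitting on attribute 0 whose `"overcast"` child stores `res = true` classifies any `{"overcast", ...}` example as true.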

So tree building comes down to `new node(...)`. This is just an implementation detail, but I can't resist mentioning that object-oriented and functional programming turn out to have a surprising amount in common.

For instance, here is binary-tree insertion in my Haskell:

-- this is haskell code
insert :: (Ord a) => a -> Tree a -> Tree a
insert x Empty = leaf x                  -- leaf x = Node x Empty Empty
insert x (Node v l r)
    | x > v     = Node v l (insert x r)
    | x < v     = Node v (insert x l) r
    | otherwise = Node x l r             -- equal: already in the tree

Looks similar, doesn't it?

All right, no more digressions; on to the code itself.

A decision tree draws a conclusion from a pile of propositions. Overall it feels a bit like parts of a k-d tree: rummaging through a bunch of features. The difference is that a decision tree uses information entropy (Shannon's invention, I believe, though as a physics student I keep thinking of Maxwell's adorable demon) to pick, at each step, the attribute that best partitions the set. ID3 computes the so-called information gain, the `Gain` above; the larger the gain, the more informative the attribute, and the more we should trust it. So let's take data from Mitchell's website and experiment (incidentally, this CMU professor has put his book online for free, and his sample programs are in Lisp that I can't read).

protected double Gain(Set<obje> data,int k)
    {
        // Entropy below returns the NEGATED weighted entropy of the split
        // on attribute k, so adding it to Entropy0 yields the gain
        return Entropy0(data,k)+Entropy(data,k);
    }

That's what `Gain` is: `Entropy0` is the entropy of `data` with respect to the conclusion, and `Entropy` sums, over each value of attribute k, the (negated, weighted) entropy of the corresponding subset.
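In standard notation (as in Mitchell's book), the two quantities combine into the familiar definitions:

```latex
\mathrm{Entropy}(S) = -p_{+}\log_2 p_{+} - p_{-}\log_2 p_{-}
\qquad
\mathrm{Gain}(S, A) = \mathrm{Entropy}(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\,\mathrm{Entropy}(S_v)
```

where $S_v$ is the subset of $S$ taking value $v$ on attribute $A$. Since my `Entropy` returns the negated weighted sum, `Gain` can simply add the two terms.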

protected double Entropy0(Set<obje> data,int k)
    {
        double up=0,down=0;
        for(obje obj:data)
            if(obj.res)
                up++;
            else
                down++;
        return -(up/(up+down)*log2(up/(up+down))
                +down/(up+down)*log2(down/(up+down)));
    }
 
    protected double Entropy(Set<obje> data,int k)
    {
        HashMap <String,HashSet<obje>> hm=new HashMap<String,HashSet<obje>>();
        double len=data.size();
        double res=0;
        for(obje tem:data)
            if(hm.get(tem.field[k])==null)//first time we meet this value: start a new subset
            {
                HashSet<obje> set=new HashSet<obje>();
                set.add(tem);
                hm.put(tem.field[k],set);
            }
            else
                hm.get(tem.field[k]).add(tem);
 
        Set<Map.Entry<String,HashSet<obje>>> set = hm.entrySet();
        double up,down;
        for(Map.Entry<String, HashSet<obje>> entry: set)
        {
            up=0;down=0;
            HashSet<obje> set2=entry.getValue();
            for(obje ob1:set2)
            {
                if(ob1.res)
                    up++;
                else
                    down++;
            }
            //a pure subset contributes zero; guard against 0*log2(0) producing NaN
            double t=(up==0||down==0)?0
                :up/(up+down)*log2(up/(up+down))
                +down/(up+down)*log2(down/(up+down));
            res+=(up+down)/len*t;//t is the negated entropy of this subset
        }
        return res;
    }
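As a sanity check on these formulas, here is a tiny self-contained demo; the counts (9 positive, 5 negative, and the Wind split) come from the weather data used later in this post:

```java
// EntropyDemo.java: reproduces Entropy0/Gain by hand on known counts.
public class EntropyDemo {
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    // entropy of a boolean-labelled set with p positive and n negative examples
    static double entropy(double p, double n) {
        if (p == 0 || n == 0) return 0; // a pure set has zero entropy
        double pp = p / (p + n), pn = n / (p + n);
        return -(pp * log2(pp) + pn * log2(pn));
    }

    public static void main(String[] args) {
        // the weather data below: 9 positive, 5 negative examples overall
        double hS = entropy(9, 5);
        // splitting on wind: weak = [6+,2-], strong = [3+,3-]
        double gain = hS - (8.0 / 14 * entropy(6, 2) + 6.0 / 14 * entropy(3, 3));
        System.out.printf("Entropy(S) = %.3f%n", hS);      // prints 0.940
        System.out.printf("Gain(S, Wind) = %.3f%n", gain); // prints 0.048
    }
}
```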

For the details of entropy, go read Mitchell's book and build a solid foundation (actually I'm just sleepy; I'll write more another day). In any case, the bigger this quantity, the more sharply the data separate.

Take this data, for example:

//   0         1       2        3        4
{"sunny","hot","high","weak","0"},
{"sunny","hot","high","strong","0"},
{"overcast","hot","normal","weak","1"},
{"rain","mild","high","weak","1"},
{"rain","cool","normal","weak","1"},
{"rain","cool","normal","strong","0"},
{"overcast","cool","normal","strong","1"},
{"sunny","mild","high","weak","0"},
{"sunny","cool","normal","weak","1"},
{"rain","mild","normal","weak","1"},
{"sunny","mild","normal","strong","1"},
{"overcast","mild","high","strong","1"},
{"overcast","hot","normal","weak","1"},
{"rain","mild","high","strong","0"}

You may as well take this to be (or fantasize it as) the story of xuhao going out on dates with a girl.

Attributes 0 through 3 stand for the weather: outlook, temperature, humidity, and wind; the final 0 or 1 says whether he (in fantasy) goes on the date.

Then I handed the program's output to Mathematica for some quick processing, which shows us this result:

 

[Figure: treedata, the decision tree built from the data above]

 

A tree! This is the tree that produces the decisions. We can see that as soon as it's overcast, xuhao can't help himself, though other branches can also lead to a date. But there are a few interesting problems. First, let's tamper with the data.

For instance, add a wrong record!

Append this:

{"overcast","cool","normal","weak","0"}

(We declare this record wrong: common sense says cool weather makes you more eager to go out, and by the tree above this case should absolutely mean a date; only force majeure, say eating bad beans and landing in hospital, could explain staying home. So: noise!)

But then the tree becomes incomprehensible:

[Figure: treedatawrong, the tree after adding the noisy record]

 

What on earth is this! What kind of conclusions are these; a simple matter of dating has become baffling.

Or another trick: sneak in an extra column.

{"1","sunny","hot","high","weak","0"},
{"2","sunny","hot","high","strong","0"},
{"3","overcast","hot","normal","weak","1"},
{"4","rain","mild","high","weak","1"},
{"5","rain","cool","normal","weak","1"},
{"6","rain","cool","normal","strong","0"},
{"7","overcast","cool","normal","strong","1"},
{"8","sunny","mild","high","weak","0"},
{"9","sunny","cool","normal","weak","1"},
{"10","rain","mild","normal","weak","1"},
{"11","sunny","mild","normal","strong","1"},
{"12","overcast","mild","high","strong","1"},
{"13","overcast","hot","normal","weak","1"},
{"14","rain","mild","high","strong","0"}

Nothing feels out of place; we merely added a date column. And yet... the tree turned into this:

[Figure: treedata2, the tree after adding the date column]

 

Sure enough, the date column's information gain is large, but the tree is no longer lovable at all! The date fits the data perfectly, yet it is obviously not the prediction we wanted.

One more thing: I left a hole above, that irresponsible bare `return` at line 27 of the tree-building code. It means that as soon as the data contain an error, something like a NullPointerException pops out, and that is not the kind of rigor a physics student can tolerate (even when an experiment is off by a factor of three, we analyze the error into weather, luck, a bad mood, being dumped, UFOs overhead and next month's meteor shower; how could this error go unhandled?).

And so a thing called C4.5 appeared.
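One of C4.5's fixes for exactly this date-column trap is the gain ratio: divide the gain by the "split information" of the attribute, a penalty that grows with the number of distinct values. A sketch of the penalty term, my illustration rather than C4.5's actual code:

```java
// GainRatioDemo.java: why a date column gets punished by C4.5's gain ratio.
public class GainRatioDemo {
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    // split information: the entropy of the partition sizes themselves
    static double splitInfo(int[] subsetSizes, int total) {
        double s = 0;
        for (int sz : subsetSizes) {
            double p = (double) sz / total;
            s -= p * log2(p);
        }
        return s;
    }

    public static void main(String[] args) {
        // the date column splits 14 examples into 14 singletons
        int[] dateSplit = new int[14];
        java.util.Arrays.fill(dateSplit, 1);
        // outlook splits them into 5 sunny / 4 overcast / 5 rain
        int[] outlookSplit = {5, 4, 5};
        System.out.printf("SplitInfo(date)    = %.3f%n", splitInfo(dateSplit, 14));    // prints 3.807
        System.out.printf("SplitInfo(outlook) = %.3f%n", splitInfo(outlookSplit, 14)); // prints 1.577
    }
}
```

Dividing each attribute's gain by its split information shrinks the date column's score far more than outlook's, which is the behavior we wanted.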

I'm sleepy... for what happens next, see the next installment.
