Well, what you're doing in order to make that work, and
what you end up doing in supervised learning and function approximation
in general, is you make some fundamental assumptions about the world.
Right? You decide that you have a well-behaved function that is
consistent with the data that you're getting. And with that, you're
able to generalize. And, in fact, that is the fundamental problem
in Machine Learning. It is generalization. Now what's behind all of
this, I'm going to claim. Michael, you jump in whenever you disagree.
>> I disagree! Oh,
sorry. Too soon. Go ahead.
>> Is bias.
>> Bias.
>> And in particular, inductive bias.
>> Inductive bias.
>> Right. So all of Machine Learning, or certainly
Supervised Learning, is about induction, as opposed to deduction.
>> I see, induction of course being a problem
of going from examples to a more general rule.
>> Right, specifics to generalities. By contrast, deduction is?
>> Would be the opposite. It would be
going from a general rule to specific instances,
basically like reasoning. Right, in fact a lot of
AI in the beginning was about deductive reasoning: logic
programming, production rules, those sorts of things, where you have certain rules
and you do only those things that follow directly from those
rules. For example, you have A implies B; that's a
rule in the universe. So if you know A implies B in
the universe and I tell you A, then you also know... that A implies B.
>> And
therefore, you can.
>> And a.
>> Infer that.
>> B.
>> B. You have A implies B, you have A. That implies B.
>> Okay.
>> It's what we just said. That's deduction.
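That deductive step is just modus ponens. As a minimal sketch (this toy representation of rules and facts is my own, not from the lecture):

```python
# Modus ponens: from the rule "A implies B" and the fact A, deduce B.
rules = {"A": "B"}   # each entry means premise -> conclusion
facts = {"A"}        # we are told A

# Apply every rule whose premise is a known fact.
derived = {rules[f] for f in facts if f in rules}
facts |= derived

print(sorted(facts))  # both A and B are now known
```

Note that the rule only fires in one direction: knowing B alone would let us deduce nothing, which is exactly what makes this deduction rather than induction.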
>> That's deduction, but what we just did
was not deduction. Before then, when I asked you
1, 1; 2, 4; 3, 9; 4, 16, and so on and so forth, we did induction.
>> That was induction.
>> Induction is more about did the Sun rise yesterday?
>> Yes.
>> Did the Sun rise the day before that?
>> Yes.
>> Did the Sun rise the day before that?
>> Yes.
Did the sun rise the day before that?
>> Yes.
>> Yes. So the sun has risen every day. Is the sun going to rise tomorrow?
>> I sure hope so.
>> We all hope so. And we all act
like it does because if it doesn't, then there are
a whole bunch of other things we ought to be
doing besides sitting in this studio and having this interview.
>> I think we should warn the plants.
>> [LAUGH] I don't think the plants are going to care.
>> They are! They really need sun.
I think we all need some, Michael. So the idea there
is that induction is crucial, and that inductive bias is crucial.
And we'll talk about all this in the course. But that's
kind of fundamental notions behind supervised
learning and machine learning in general.
>> I agree with that. Yeah.
>> All right, so we're on the same page. So that's supervised learning.
Supervised learning, you can talk about it in these high muckety-muck ways. But at
the end of the day, it's function approximation. It's figuring out how to
take a bunch of training examples and come up with some function that generalizes
beyond the data you see.
>> So why wouldn't you call it function induction then?
>> Because someone said supervised learning
first. Well, there is a difference.
>> No, no, no, no you said supervised learning is
function approximation and I want to
say supervised learning is. Function induction?
>> As opposed to function approximation?
>> Yeah.
>> Okay. It's
>> Approximate function induction.
>> Or induction of approximate, of.
>> Approximate functions.
>> Something like that, yeah.
>> You don't want to induce
an approximate function, you want to induce the actual function.
>> Yeah, but sometimes you can't.
>> Yeah.
>> Because sometimes you think it's quadratic, but it's
not. I have that as a plaque on my wall.
>> You do?
>> No.
>> Yeah, I didn't think so. Okay, so that's supervised learning.