Week 12 - AIMA Chapter 16 & Chapter 18
I found Chapter 16 to be an extremely challenging read. I was able to digest the sections that did not revolve around mathematical equations, but the sheer number of equations left me unable to fully grasp what the authors were actually presenting in the chapter. The discussion of why an agent whose preferences are not transitive is irrational was beautifully laid out. In fact, the entirety of section 16.2 was very understandable for me. While trying to apply some of the ideas from this chapter to my project, I am unsure whether utility plays a role for me. I will also say that the slightly philosophical discussion of placing a numerical value on human life was brilliantly handled. Without truly taking a stand either way, the authors made the reader understand that when designing certain AI agents, this step may be necessary and does not necessarily present a moral quandary. Section 16.4 is where I began to have difficulty following the authors. Too many terms were presented too rapidly and used in conjunction with each other for me to gain much from the reading.
In Chapter 18, I spent most of my time looking at decision-tree learning, as it seems most applicable to my project. It is also straightforward in principle, although not necessarily in practice. Looking at the authors' restaurant example, it is obvious that a simplistic decision tree may not make the "correct" decision in many instances. For example, immediately discounting a restaurant that has no patrons seems a little strange to me; I wouldn't pass on a restaurant for that reason alone, yet it sits at the root of their tree. In applying a decision tree to my project, I think having the principal diagnosis as the root, and then both secondary diagnoses and procedures as branches, makes a lot of sense (a rough sketch of what such a tree might look like in code is below). Since I will be working with a small subset of codes (30-40 total), my tree would be large, but not unmanageable.
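To make that idea concrete for myself, here is a minimal Java sketch of how such a tree might be represented. The class and method names (CodeNode, suggest, and so on) are placeholders I am inventing here, not anything from the book or from code I have actually written.

import java.util.HashMap;
import java.util.Map;

// Rough sketch of a decision-tree node for code suggestion (all names invented).
// The principal diagnosis would sit at the root, and each branch is keyed by a
// secondary diagnosis or procedure code found on the claim.
class CodeNode {
    private final String suggestion;                      // what this node suggests if no branch matches
    private final Map<String, CodeNode> branches = new HashMap<>();

    CodeNode(String suggestion) { this.suggestion = suggestion; }

    void addBranch(String code, CodeNode child) { branches.put(code, child); }

    // Walk the tree using the codes present on a claim; stop when no branch matches.
    String suggest(Iterable<String> claimCodes) {
        for (String code : claimCodes) {
            CodeNode next = branches.get(code);
            if (next != null) {
                return next.suggest(claimCodes);
            }
        }
        return suggestion;
    }
}

With only 30-40 codes in play, a tree built out of nodes like this should stay small enough to construct and inspect by hand.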
Project Update (mirrored on the Project page)
I have not started my coding yet, but I have been focusing on how to properly categorize my claims. I believe it would make sense to start with pair-wise suggestions, then, as more codes in the claim are read into the program, eliminate (or add) potential code suggestions (a rough sketch of this idea appears below). The process is generally the same, but it takes into account the claim as a whole rather than the individual codes that make up that claim. I am also beginning to lean towards Java instead of Lisp. I am far more comfortable in Java, despite having no Swing experience, than I am in Lisp, and since the end of the semester is only a few short weeks away, I believe Java will give me the best chance to successfully complete my project without running into language and syntax stumbling blocks. If you know of a good Swing resource, I would be very interested in picking it up to help me on my way.
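Here is the pair-wise idea sketched in Java, just to pin down what I mean; again, the names (CodeSuggester, addPair, suggestFor) are my own placeholders and not anything implemented yet.

import java.util.*;

// Sketch of the pair-wise suggestion idea (names invented): each known code maps
// to the set of codes it is commonly paired with, and as the codes on a claim are
// read in, the candidate suggestions are narrowed to those consistent with the
// whole claim rather than with any single code.
class CodeSuggester {
    private final Map<String, Set<String>> pairedWith = new HashMap<>();

    void addPair(String a, String b) {
        pairedWith.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        pairedWith.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    // Suggestions are the codes paired with every code already on the claim.
    Set<String> suggestFor(List<String> claimCodes) {
        Set<String> candidates = null;
        for (String code : claimCodes) {
            Set<String> partners = pairedWith.getOrDefault(code, Collections.emptySet());
            if (candidates == null) {
                candidates = new HashSet<>(partners);     // start from the first code's pairs
            } else {
                candidates.retainAll(partners);           // eliminate as more codes are read
            }
        }
        if (candidates == null) {
            return Collections.emptySet();
        }
        candidates.removeAll(claimCodes);                 // don't suggest codes already present
        return candidates;
    }
}

Whether narrowing should use intersection (as here) or something looser is exactly the kind of decision I still need to work out.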
Week 6 - AIMA Chapter 9 (Inference in FOL) & Chapter 10
The section discussing inference in First-Order Logic was interesting, but I don't feel that I got much out of it. I felt that the authors tried to trace the idea of using such logic very quickly from Aristotle all the way to Kaufman et al. in 2000. What resulted, in my eyes, was a rambling four pages of various names and theories with very little description of what the significant parts of those theories are. Reading it made me feel like I was reading a grammar-school history book (e.g., the Magna Carta was signed in 1215). I believe this was the authors' intent, however, seeing as the section is essentially an appendix to the chapter, so this is more of an observation than a critique.
The beginning sections of chapter 10 were the most interesting to me. I like how the authors brought together FOL and Set Theory in a meaningful way, both for illustrating their points and for illustrating, essentially, how you would go about programming these concepts into your knowledge base. To make a truly useful, general-purpose AI device, representing objects in the device's world in a meaningful way is vitally important. Using categorization, the knowledge base can be significantly reduced in size, since the members of a category all share any properties inherent to that category, in the same way that any subcategories inherit those same properties. This simple concept is very powerful in reducing the cost of accessing the knowledge base. For example, the device encounters a dog and a cat. Both entities are members of the category mammal and therefore both have fur, mammary glands, and warm blood. The device no longer needs to look up all properties of the dog and all properties of the cat to find the same information.
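A tiny Java sketch of that lookup, with invented names, just to convince myself it really is that simple:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Properties are stored once on a category; members and subcategories inherit
// them by walking up the category chain instead of repeating them everywhere.
class Category {
    private final Category parent;                        // e.g. Dog -> Mammal
    private final Set<String> properties = new HashSet<>();

    Category(Category parent, String... props) {
        this.parent = parent;
        properties.addAll(Arrays.asList(props));
    }

    boolean has(String property) {
        if (properties.contains(property)) return true;
        return parent != null && parent.has(property);    // inherited from a supercategory
    }
}

class CategoryDemo {
    public static void main(String[] args) {
        Category mammal = new Category(null, "fur", "mammary glands", "warm blood");
        Category dog = new Category(mammal, "barks");
        Category cat = new Category(mammal, "meows");
        System.out.println(dog.has("fur"));               // true, found once on Mammal
        System.out.println(cat.has("fur"));               // true, same single place to look
    }
}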
The final portion of this chapter that I found interesting was the short discussion pointing out that to get a meaningful comparison between two or more qualities, those qualities do not need to be quantitative in nature. Setting up relationships in the knowledge base is sufficient to make the comparison. For example, the midterm is easier than the final; no numbers need to be involved to get a meaningful comparison.
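The same point can be sketched in code (again with names I am making up): "easier than" stored as bare facts, with comparisons following the relation rather than any numeric difficulty score.

import java.util.*;

// Purely relational comparison: facts like easier(midterm, final) are stored as
// pairs, and queries follow the relation transitively, with no numbers involved.
// (Assumes the asserted facts contain no cycles.)
class EasierThan {
    private final Map<String, Set<String>> easierThan = new HashMap<>();

    void assertEasier(String a, String b) {               // fact: a is easier than b
        easierThan.computeIfAbsent(a, k -> new HashSet<>()).add(b);
    }

    boolean isEasier(String a, String b) {
        for (String next : easierThan.getOrDefault(a, Collections.emptySet())) {
            if (next.equals(b) || isEasier(next, b)) return true;
        }
        return false;
    }
    // e.g. assertEasier("midterm", "final"); isEasier("midterm", "final") -> true
}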
Week 5 - AIMA Chapter 8
Having recently taken a course in Symbolic Logic, this chapter was essentially review for me. I find the idea of using First-Order Logic as a means of filling a knowledge base to be a very logical (hah!) step, as it provides many benefits. The key benefit is one that the book brings up frequently: FOL is concise. It is possible to describe a massive amount of information in one or two lines of FOL. For example, "all professors teach classes" would be a very large task in Propositional Logic, but can be summed up in one line of FOL: ∀x Professor(x) ⇒ ∃y (Teach(x,y) ∧ Class(y)). Propositional Logic would require a statement about every professor in the world and exactly which classes they teach.
Furthermore, it is fairly trivial using programming languages such as Java and C++ to accurately portray what a statement in FOL is actually saying. Since FOL uses connectives such as AND and OR in the same way as a programming language, those are extremely trivial. Determining the truth of a predicate applied to an object is also trivial with a simple IF statement. THERE EXISTS AT LEAST ONE can be determined by a short-circuiting loop, and FOR ALL can be implemented as a loop testing every object in the world. Once you have the methods defined for testing the connectives, predicates, and quantifiers, slightly more complex functions can easily be implemented to determine inferences. To increase the speed of these functions, it is also possible to build more complex rules of FOL, such as De Morgan's laws as applied to quantifiers, into the program so that certain inferences return a result even faster.
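As a sanity check on that claim, here is a small Java sketch encoding the professor sentence above over a tiny, made-up world. The helper names exists and forAll are my own, not from any library.

import java.util.List;
import java.util.function.BiPredicate;
import java.util.function.Predicate;

// EXISTS short-circuits on the first witness; ALL tests every object in the
// (finite) world and fails on the first counterexample.
class FolDemo {
    static <T> boolean exists(List<T> world, Predicate<T> p) {
        for (T x : world) if (p.test(x)) return true;
        return false;
    }

    static <T> boolean forAll(List<T> world, Predicate<T> p) {
        for (T x : world) if (!p.test(x)) return false;
        return true;
    }

    public static void main(String[] args) {
        List<String> world = List.of("Smith", "Jones", "CS101", "CS540");
        Predicate<String> professor = x -> x.equals("Smith") || x.equals("Jones");
        Predicate<String> isClass = x -> x.startsWith("CS");
        BiPredicate<String, String> teaches = (x, y) ->
                (x.equals("Smith") && y.equals("CS101")) ||
                (x.equals("Jones") && y.equals("CS540"));

        // ∀x Professor(x) ⇒ ∃y (Teach(x,y) ∧ Class(y))
        boolean allProfessorsTeachClasses = forAll(world,
                x -> !professor.test(x)
                        || exists(world, y -> teaches.test(x, y) && isClass.test(y)));
        System.out.println(allProfessorsTeachClasses);    // true in this toy world
    }
}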
Week 1 - AIMA Chapters 1 & 2, PAIP Chapters 1, 2 & 3
The field of Artificial Intelligence covers a wide variety of topics. These topics are drawn from a large range of disciplines well beyond Computer Science itself: philosophy, mathematics, neuroscience, and psychology all play a major role in the field. Of all the components of Artificial Intelligence, the central goals are what caught most of my attention. I find the dynamics of Acting Humanly, Thinking Humanly, Acting Rationally, and Thinking Rationally to be very interesting.
I found one of the philosopher quotes to be quite amusing. I find it very interesting that ancient and modern philosophers alike have decided that there is no possibility of free will, or thinking for that matter, in animals. I have had a number of pets, and I can find no evidence that they do not think; they may have short memories, but there definitely seem to be strong thought processes at work.
Along those lines, I think that creating a system that thinks like a human would be a fundamentally bad idea. Humans are very arrogant, and often irrational, creatures. I have no interest in someone perfecting a machine that thinks identically to a human. I like the idea of thinking rationally much more. As I understand it, thinking rationally simply means that the machine will attempt to act to achieve the best possible outcome for the task(s) at hand. Humans don't do that. If we want to use AI as a tool to make our lives easier, the field should veer away from the human aspects and focus on the rational aspects.
To jump tracks to the other book, I found the tutorials to be exceptionally helpful. It has been a few years since I have looked at Scheme, but the syntax has quickly started coming back to me. Perhaps it is the rustiness, but I am having difficulty figuring out where the two languages differ. Is the only difference the number of built-in functions, or is it in how the compilers work? I doubt I will find those answers in either of these books, but perhaps it is a worthwhile side project in my free time (whatever that may be).