Type Theory and Functional Programming (1999) [pdf]

cs.cornell.edu

160 points by fanf2 11 hours ago


denismenace - 9 hours ago

I think that, step by step, functional programming will end up being widely adopted. Concepts with a strong theory behind them tend to last longer, similar to how relational databases and SQL are still the gold standard today.

randomNumber7 - 10 hours ago

Why is type theory always tied to functional programming? Wouldn't it be possible to use some of the ideas in imperative languages?
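Some of the ideas do show up in imperative languages already. Rust, for instance, is imperative but borrows algebraic data types, exhaustive case analysis, and parametric polymorphism straight from the type-theory tradition. A minimal sketch (the names here are purely illustrative):

    // A sum type (tagged union): a reading is either Celsius or Fahrenheit.
    enum Temp {
        Celsius(f64),
        Fahrenheit(f64),
    }

    // Exhaustive pattern matching: the compiler rejects the function if a
    // variant is left unhandled, much like case analysis on an inductive type.
    fn to_celsius(t: Temp) -> f64 {
        match t {
            Temp::Celsius(c) => c,
            Temp::Fahrenheit(f) => (f - 32.0) * 5.0 / 9.0,
        }
    }

    // Parametric polymorphism: roughly "for all types T, T -> T".
    fn identity<T>(x: T) -> T {
        x
    }

    fn main() {
        println!("{}", to_celsius(Temp::Fahrenheit(212.0))); // prints 100
        println!("{}", identity(42));
    }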

phoenixhaber - 35 minutes ago

A > B := A $var B $st $te C $contin (A $or B) $st (A(~$incont)C) $var (B(~$incont)C)

So here's a recursive definition of what it means mathematically for one thing to be greater than another, where A, B, and C are defined as sets, using only a small number of operators. I believe that Bertrand Russell's impossibility criterion no longer holds when looking at recursive infinities, for example, so this sort of thing may be useful. I don't know of a type-theoretic programming language that currently knows how to deal with infinities, but it would be interesting if there were a metaprogramming environment that could tell whether statements such as the above are true or not and then write programs about them (so in some ways similar to Coq, and in some ways different from other proof-based systems).

For example, this statement (as distinct from the one above) is false:

A > B := A $var B $st $te C $contin (A $or B) $st ~((A(~$incont)C) $var (B(~$incont)C))

However, how would we create a computer program able to evaluate that? I suppose it could just iterate a million times and see that the statement is approximately true, as is done in numerical methods, but that doesn't have the same cognition or way of thinking that a person has.
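Over a small finite universe you could at least check such a statement mechanically, since the quantifiers become loops over subsets. Here is a minimal sketch in Rust under my own reading of the notation above ($te as "there exists", $contin and $incont as containment, ~ as negation, $var and $or as "or"), with sets modelled as bit masks; the function names are only illustrative:

    // Brute-force check of: there exists C contained in (A union B)
    // such that A is not contained in C, or B is not contained in C.
    // Sets are bit masks over a small universe {0, .., n-1}.

    fn subset(x: u32, y: u32) -> bool {
        (x & !y) == 0 // every element of x is also in y
    }

    fn exists_witness(a: u32, b: u32, n: u32) -> bool {
        let union = a | b;
        (0u32..(1u32 << n))
            .filter(|&c| subset(c, union)) // candidate C, a subset of A union B
            .any(|c| !subset(a, c) || !subset(b, c))
    }

    fn main() {
        // A = {0, 1}, B = {1}, over the universe {0, 1, 2}.
        let (a, b) = (0b011u32, 0b010u32);
        println!("{}", exists_witness(a, b, 3)); // true: C = {} misses both
    }

Over an infinite domain the loop never terminates, of course, and even the finite brute-force check is not the kind of understanding a person has; that is where a Coq-style proof environment would have to take over.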

In some ways this is good, as it means there is still a way to prove that someone writing on the internet is a real person - until the machines figure out how to do this, and then the internet becomes less trustworthy as a method of verification. Which brings up the whole question of how you prove that someone on the internet is a person and not a machine, which is a problem in its own right. Someone can publish a book which you could read, and at this point there is no guarantee you would know whether the book was written by a person or by an artificial intelligence.

I can evaluate a statement as being true in the following way, wherein the statement is:

Truth is such that it's a set containing a bunch of sets, for which no set contains its contradiction, and if one set implies a second set and the second set implies a third, then the first set implies the third. Each of the sets is infinitely recursive to some degree of numerical analysis, until you end up hitting quantum effects, in which case things are both true and false, until the distinction between true and false itself no longer applies.

I'll mostly let you come up with the symbols for that one (one possible reading is sketched below). In this way we have a set of what is and is not true that is more than just a collection of facts, but that preserves the property that the facts must have implications which are complementary. Otherwise one implication is more true than another, in which case we no longer have a true set but a set of implications, for which something like Occam's razor should be used to determine which implication is more true than another, or for which the concept of truth is simply ignored in favor of subconscious decision making, to the extent that truth is important for what to do in the first place - to the extent that it informs how we think, feel, or act.
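One possible symbolization of the two conditions above (my reading only: T is the set of sets, ¬A stands for the contradiction of A, and ⇒ is whatever implication relation between sets is intended):

    % Consistency: T never contains a set together with its contradiction.
    \forall A \in T.\; \neg A \notin T
    % Transitivity: implication between members of T composes.
    \forall A, B, C \in T.\; \bigl((A \Rightarrow B) \wedge (B \Rightarrow C)\bigr) \rightarrow (A \Rightarrow C)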

This should be relatively intuitive to a person reading it; however, if you attempt to teach Google AI what this means, it doesn't understand, since it uses language-based input. I find this incredibly interesting, as it fundamentally doesn't understand mathematics. If you ask it to make a picture with n symbols in it, the algorithm doesn't know how to do this and may produce five or twenty symbols or something. In essence, it doesn't know how to count.

What this means is that if we want programmers to make the next iteration of things that the AI doesn't know how to do, we may need to start relying on pure mathematics before we can then translate that in some way. On the other hand, perhaps it would be interesting to train an AI on all pure mathematical symbols, see what it comes up with, and then consider whether there is a gap between language and mathematics or whether that gap does not in fact exist. I posit that it does not exist and that language comes after mathematics. Which isn't to say that we should all go around speaking in numbers, as feelings themselves can be nebulous - which brings up compatibilist versus non-compatibilist questions of free will. At some point I doubt that free will exists, and then doubt that this matters, although this becomes complicated to say, because with words you are not a priori of the logic that implies what is and is not true. For me, things like "happiness" and "rage" can be defined, although that doesn't make them the feeling itself; by its definition, this may be used in negative ways to mechanize the nature of the self.

In any case, all of these are incredibly interesting things to me at this point.

YellowSuB - 6 hours ago

Is this book available in print anywhere? I am unable to find it except as a used hardcover.

lugu - 11 hours ago

Is it good material for diving into this topic today?