Type Theory and Functional Programming (1999) [pdf]
cs.cornell.edu | 160 points by fanf2 11 hours ago
I think that, step by step, functional programming will end up being widely adopted. Concepts that have a strong theory behind them tend to last longer. Similar to how relational databases and SQL are still the gold standard today.
I agree. It's happening.
Lambdas are now a requirement for any modern programming language. Monadic promises are how asynchronous programming is done. Rust is basically OCaml except with typeclasses and without a GC.
Inch by inch, ideas from FP bubble up into mainstream programming.
I predict that algebraic effects (a la Koka) are next, but it'll be a while before the industry is ready for it.
To play the devil's advocate: From an OOP perspective, lambdas are just syntactic sugar for a special form of objects. Regarding monads, we had smart pointers and the like in C++ already in the 1990s. Both concepts are orthogonal to mutability.
I'm convinced that imperative programming will endure alongside functional code, because being stateful is how the real world works, and most programs have to repeatedly interact with the real world over the course of their execution.
>> From an OOP perspective, lambdas are just syntactic sugar for a special form of objects.
Indeed Smalltalk - a pure OOP language - had `Block` objects fifty years ago.
>> I'm convinced that imperative programming will endure alongside functional code, because being stateful is how the real world works, and most programs have to repeatedly interact with the real world over the course of their execution.
That, and also who wants to wrestle with monad transformers to make a stateful computation work when they have to fix a bug in production ASAP?
Sure. I don't mean to say that imperative programming is going anywhere.
If you're looking for programming languages with no support for imperative programming, Excel is pretty much it. Even Haskell has robust support for sequencing actions. If it didn't, I don't think we'd be talking about it at all.
What I predict is that ideas from FP will continue to bubble up into the mainstream. Like prior inventions, they won't be presented as a thing that asks you to rework all of your algorithmic code. They will instead be polished and presented in a way that makes them work more as extensions to what you can already do.
If you squint a little bit, Haskell's do-notation is already available in mainstream languages in the form of async/await syntax. async/await is not quite as general as the original Haskell solution, but it also doesn't ask you to completely rethink the way you design algorithms.
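To make the analogy concrete, here is a minimal, hypothetical sketch (the file name is made up): the do-block sequences IO actions the same way an async function awaits promises.

    -- Sequencing effects with do-notation; in JS this would read roughly as
    --   const contents = await readFile("config.txt"); ...
    main :: IO ()
    main = do
      contents <- readFile "config.txt"     -- hypothetical input file
      let n = length (lines contents)
      putStrLn ("line count: " ++ show n)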
The duality between closures and objects is well known. It's hard to avoid having some construct like that, but mainstream languages shifting toward the functional perspective on it is definitely still evidence for penetration of functional ideas.
> Concepts that have a strong theory behind them tend to last longer.
The programming paradigms (functional, imperative, declarative) as implemented generally don't have strong theories behind them. For any major language apart from SQL, we have no sharp line to detect when a language sits in any of those categories. One can write something that is recognised as functional in an imperative language, for example. And even the best example of purity, SQL, in practice needs to be hosted in an imperative or functional environment to get useful results. The declarative part of the language isn't capable, on its own, of producing complex, performance-friendly results.
One of the interesting things about being a member of the functional programming community is I genuinely can't tell what the claim to fame of the community actually is beyond a sort of club-like atmosphere where people don't like mutability. Are we claiming that C++ programmers don't know how to write pure functions? I hope not, they clearly know. Many are good at it.
I see this misconception a lot. Functional programming has nothing to do with mutability or immutability. Just like product types (or associative arrays) don't make for object oriented programming, the paradigm is about the patterns you use and the total structure of the program. In the case of functional programming, it's that your entire program is built around function composition and folds. This changes how you build your program on a fundamental level.
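For a tiny, hypothetical illustration of that style in Haskell: the whole computation below is just a composition of functions, with a single fold doing the aggregation.

    import Data.Char (toLower)
    import qualified Data.Map as Map

    -- Count word occurrences: normalize case, split into words, then fold
    -- the whole list into a map in one pass.
    wordCounts :: String -> Map.Map String Int
    wordCounts = foldr bump Map.empty . words . map toLower
      where
        bump w m = Map.insertWith (+) w 1 m

    main :: IO ()
    main = print (wordCounts "The quick brown fox jumps over the lazy dog the")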
Additionally, functional programming does have a strong theory behind it: that of the lambda calculus. Everything else is formalization and constructions built on top of that. We don't have a "hard line" to detect if a language is "functional" because it's not a language paradigm but a property of the code itself. Grammar is orthogonal to this stuff. You can, however, make a pretty good guess by looking at the standard library of a language. There is no ambiguity that Idris or Haskell are languages designed for functional programming, for example.
I think this stems from the fact that a lot of programmers, through no fault of their own, have engaged with tools for functional programming but skipped building things with the alien and weird stuff, so in the end there's no actual learning of functional programming itself. Every new programming paradigm is a shift of your core perspective; they require you to relearn programming from the ground up. It shouldn't feel like "this is just ABC with XYZ"; you are only truly beginning to learn when you are put back in your shoes as a middle schooler unlocking the power of nested conditionals. If you don't get that feeling, you're just learning a different language and not a programming paradigm.
> And even the best example of purity, SQL
Purity is not a claim of functional programming. Also, SQL is not anywhere close to being a functional language: it's not really a programming language in the first place, and even if it were (and it can be, with extensions), its paradigm is far more in line with logic programming.
> This changes how you build your program on a fundamental level.
I think this manifestation of cargo cult mentality should be put to rest. For each paradigm we heard the same story, but we still see programs in ${LANGUAGE_A} not even following idiomatic styles and looking instead like programs transcoded from ${LANGUAGE_B}. You do not change the way you build programs by rewriting projects in a specific language or even style. You just get the same people implementing the same programs according to the same mental model. The key factor is experience and familiarity, and you don't build any of that by marketing a particular programming style, much less by which framework or library a project happened to adopt. The hard truth is that experience and familiarity derive from problems a developer faced and the solutions that worked, and quite bluntly, after all these decades, functional programming isn't living up to the hype created by its fans. It's the curse of LISP all over again.
I'm not sure that your objection puts anything to rest, least of all a "cargo cult mentality". You re-assert exactly what I say just to disagree and miss the rest of the post. Codebases which are not written in a functional style are not functional programming, no matter the language being used. The overwhelming majority of programmers did not learn to program in a functional style, and without the long-haul learning process of thinking about program construction in terms of function composition and application, they will program according to whatever they've always used. They can't program with or use a mental framework that they don't know. Languages are languages. The syntax and grammar of a language have (virtually) no effect on a programming paradigm. There's nothing in the grammar stopping you from using Haskell as an imperative language. That's a knock-on effect of being a Turing-complete grammar, no more, no less.
Given these facts which are absolutely not up for debate, some statistical sample of codebases which are just some unstructured hodge-podge mudball in any given language isn't particularly meaningful. Somebody who spent 2 weeks reading Haskell tutorials in their off-time isn't going to be writing functional programs in Haskell. Hell, somebody who has been writing mudball Haskell programs for years isn't going to be writing functional programs.
> and quite bluntly, after all these decades, functional programming isn't living up to the hype created by its fans
Perhaps your problem is that you are looking through such a lens that "hype" even registers on your radar. And that's why I'm sure you can't name a single weak point of functional programming off the top of your head besides the ever-popular, ever-erroneous complaint about "performance" by people who think functional programming is immutability, purity and function pointers. If all you care about is the social baggage that comes with a working discipline and its tools, how can you possibly have anything but the most trivial, surface-level and, to put it bluntly, ignorant view of things? There is no substance to that nonsense, don't kid yourself. Until you change that you will never receive any signal, just noise. The heterogeneity of programming as a discipline will continue to frustrate you to no end.
"All over again"
Lisp is a tool explicitly designed for functional programming ;-)
> Similar to how relational databases and SQL are still the gold standard today.
...except they aren't. The world is gradually slipping towards nosql databases and stepping away from normalized data stores, to the point where tried and true RDBMS are shoehorning support for storing and querying JSON documents.
All flagship database services from major cloud providers are nosql databases.
Perhaps you're confusing pervasiveness with being the gold standard. Yes, RDBMS are pervasive, but so are non-compliance with the standard and vendor-specific constructs. The biggest factor was not the theoretical background behind each implementation but ease of use and ease of adoption. SQLite is an outstanding database not because of its compliance with SQL standards and adherence to theoretical purity, but because it's FLOSS and trivial to set up and use.
I expect the same factors to be relevant in the adoption of functional programming. This stuff has existed for decades on end, but mass adoption was somewhere between negligible and irrelevant. What started to drive adoption was the arrival of modules and components that solved real-world problems and helped developers do what they wanted to do with less effort. In JavaScript, the use of map, filter and reduce is not driven by any new-found passion for functional programming but because it's right there and it's concise and simple and readable and capable. In C#, LINQ is gaining traction for the same reasons but is largely held back by performance issues.
The key aspect is pragmatism; theoretical elegance is, by comparison, a mere nice-to-have.
> The world is gradually slipping towards nosql databases
My observation has been that this pendulum is swinging back the other direction. You're correct that some NoSQL-isms found their way into RDBMS, but at least in the circles in which I travel, people are by and large heading back to RDBMS stores.
That seems improbable. Pure functional languages are very unpopular according to TIOBE. In particular, interest in Haskell seems to be in decline. Functional features are mostly just an add-on to programming languages with mutable state. By far the most popular language, Python, doesn't even have static typing.
I know powerful typing features are very important for Haskell in particular, but is static typing considered a functional feature more broadly? Most lisps don't have static typing as far as I know. Clojure was one of the main functional languages that actually saw industry use for a while, and it is dynamically typed.
It seems to me that algebraic nominal types are getting strongly attached to functional programming right now. Even on multi-paradigm languages, both idioms tend to come together.
It's not a necessary relation; those things are pretty much independent, except in that pattern matching and function composition complement each other very well.
"but is static typing considered a functional feature more broadly"
It might be, but it is not an essential feature. Functional programming is the practice of using pure functions as the primary form of computation and composition. Whether one adds types to this is as relevant as adding types to any other paradigm (imperative, oop, logical, functional, etc)
Right, that was my assumption. I asked because the person I replied to mentioned the popularity of dynamic languages as a data-point for the decline in popularity of functional programming.
That's already been happening for quite some time. Mainstream programming has done little else in recent years than converge toward functional programming.
Also, they wrote "functional," not "purely functional."
By the way, from a formal perspective, imperative programming is an extension of purely functional programming.
This is true, in that most of the scholarship builds up its proofs starting with the lambda calculus. But there are so many equivalent models of computation (Turing machines, SKI combinators, Excel spreadsheets) that I'm not at all convinced they had to start with lambda calculus. They just happened to.
Out in the real world, the thing that all programming languages are actually built on top of looks much more like a Turing machine than a collection of composed anonymous functions. But of course if you want to make your programs go really fast, you can’t treat them like Turing machines either. You need to acknowledge that all of this theory goes out the window in the face of how important optimizing around memory access is.
Which isn't to say one perspective is right and one is wrong. These perspectives all exist and have spread because they can all be useful. But acting like one of them is "reality" isn't all that helpful.
Ps. Not that the parent actually said the formal perspective was reality. I just wanted to articulate this thought I had bouncing around in my head for a while.
> Out in the real world, the thing that all programming languages are actually built on top of looks much more like a Turing machine than a collection of composed anonymous functions.
Hardware logic as described in an HDL is precisely a collection of "composed anonymous functions", including higher-order functions which are encoded as "instructions" or "control signals". We even build stateful logic from the ground up by tying these functions together in a "knot", with the statefulness being the outcome of feedback.
But it's hard to argue the machine at the end is stateless. We can endlessly do this. You can construct lambda calculus with Turing machines and Turing machines in lambda calculus.
There seems to be this weird idea in the functional community that the existence of some construction of one thing in another shows that one of those things is "more fundamental" than the other, when in reality this is often a circular exercise. e.g. Functions can be formalized as sets and sets can be formalized as functions.
Even worse in this specific case, the Church-Turing thesis tells us that they're equivalent, which is the only sensible answer to the question of which is more fundamental. There's an oft-quoted phrase about "deep and abiding equivalencies", and it bears pointing out how deep and abiding these equivalencies are. From a formal perspective they are the same. Yes, arguments could be made that typed lambda calculus and its relation to logic are important, and that's true, but it's not a formal argument at all and I think it's best to be clear on that.
Good point on functional vs purely functional. To the GP, what we're seeing is that more mainstream languages are taking the most popular features from functional languages and integrating them. The combination of Scala & Rust is a perfect example given how ML-inspired they are. C# has a bevy of functional trappings.
Haskell is suitable for, and designed for, bleeding edge experiments, not for practical usage. Its low popularity says very little about the "market penetration" of better engineered functional languages.
> By far the most popular language, Python, doesn't even have static typing.
Arguably, Python is statically typed - it's just that it only has one type: object.
I know Robert Harper has advanced that argument, but I think it's only really interesting for striking a single rhetorical blow: that, uh, actually statically typed languages are trivially more expressive than Python, since Python has one static type and they have as many as you want.
But I think as an actual ontology, as an actual way of understanding and structuring the world, it's obviously quite silly. There's no compile-time (static) type checking phase in Python, because it'd be trivial and therefore a waste of time. Without static type checking, what does it mean to say it's "statically" typed? Moreover, the language has an entirely different, extraordinarily related concept of type which is checked dynamically. Yes, you can say that "Python is a statically typed language with only a single meaningless static type and but also dynamically checked dynamic types", but frankly that's not telling you anything that calling it a dynamically typed language isn't.
From a different angle, you can't program SML or something "monotypically" - just restricting yourself to one type - without effectively building an interpreter for an entirely separate, dynamic language inside of it (you can of course build a statically typed language inside of Python, so if you think that counts you're just stripping the meaning from comparing languages). In that sense, Python's just plain doing something fundamentally different from what "a static language with one type" is.
These so-called dynamic types are merely the equivalent of tags in a discriminated union/variant type. Statically-typed languages can easily do the same thing: the argument that this amounts to "building an interpreter" applies to any language implementation.
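As a minimal, hypothetical sketch of what I mean: a "dynamic" value becomes a sum type whose constructors play the role of runtime type tags, and the "type errors" become ordinary pattern-match cases checked at run time.

    -- One static type, many runtime tags.
    data Value
      = VInt  Integer
      | VStr  String
      | VList [Value]
      deriving Show

    -- "Dynamic" addition: the tag check happens at run time, as in Python.
    plus :: Value -> Value -> Either String Value
    plus (VInt a) (VInt b) = Right (VInt (a + b))
    plus (VStr a) (VStr b) = Right (VStr (a ++ b))
    plus _        _        = Left "TypeError: unsupported operand types"

    main :: IO ()
    main = do
      print (plus (VInt 1) (VInt 2))      -- Right (VInt 3)
      print (plus (VInt 1) (VStr "x"))    -- Left "TypeError: ..."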
> These so-called dynamic types are merely the equivalent of tags in a discriminated union/variant type.
That's far more true in a language like JavaScript or Scheme than in an "everything is an object" language like Python; the only reason why you would need a variant type for PyObject is to avoid the circular data structures the actual implementation uses.
If you allow the circular data structures, your dynamic types instead are "merely" a relatively complicated codata type, but it's far less obvious that this is actually what anyone considers to be "merely."
Why is type theory always tied to functional programming? Wouldn't it be possible to use some of the ideas in imperative languages?
FP has a small set of primitives that encode everything you can computationally do. One of the first things you'd learn is how to use those primitives to 'emulate' anything an imperative language does.
Most definitions of imperative include "impure editing of global state", which is just the equivalent of passing along a state object implicitly to every function; spelling that out is just extra verbosity for whatever fundamental type-theory point you're trying to make, and would not affect the argument in terms of those fundamental primitives.
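As a minimal sketch of that "implicit state object" point: a mutable global counter becomes a value that every function takes and returns, which is exactly what a state monad would package up for you.

    -- The "global" counter is threaded through explicitly.
    type Counter = Int

    tick :: Counter -> (Int, Counter)
    tick n = (n, n + 1)

    twoTicks :: Counter -> ((Int, Int), Counter)
    twoTicks c0 =
      let (a, c1) = tick c0
          (b, c2) = tick c1
      in  ((a, b), c2)

    main :: IO ()
    main = print (twoTicks 0)   -- ((0,1),2): two "imperative" increments, no mutation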
Not necessarily; in our compiler design lectures we also used it for type systems in imperative languages.
The Eiffel reference is one of the few manuals to fully specify the type system with denotational semantics, for example.
Here is an example of type theory in OOP languages,
https://www.sciencedirect.com/science/article/abs/pii/009605...
I am guessing that it is because imperative processes are very hard to describe properly using type theory. For example: if you have an object that has some members that are initially not set, that is a different type than when they are set. So you cannot say that the object has exactly one type. After every setting of a member, you have a new type. That's not something that you can statically work with easily. Mutation destroys the type guarantees. The type system can only be at a more limited level, one that is rougher and doesn't give as many guarantees, unless you restrict yourself in what you do to your objects. Like for example not mutating them. Or not having nullable members. Things like that.
So I think FP is more suitable for type theory research and pondering, and people who are researching the matter will not even consider doing that for imperative languages, because they understood long ago that these languages lack the guarantees that you can get with FP, and therefore are not worth spending the time on.
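A minimal, hypothetical sketch of the "set vs. unset members" point above: give the two stages two different types, and the compiler tracks which one you are holding.

    -- Before and after "setting the member" are simply different types.
    data DraftUser = DraftUser { draftName :: String }               -- email not set yet
    data User      = User      { userName  :: String
                               , userEmail :: String }               -- email set

    setEmail :: String -> DraftUser -> User
    setEmail email d = User { userName = draftName d, userEmail = email }

    -- Anything that needs the email just demands a User; passing a DraftUser
    -- is a compile-time error rather than a runtime null check.
    sendWelcome :: User -> String
    sendWelcome u = "mail to " ++ userEmail u

    main :: IO ()
    main = putStrLn (sendWelcome (setEmail "ada@example.com" (DraftUser "Ada")))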
That’s just not true. Rust has a wonderful type system and it can encode mutability, which is a must for anything performance oriented. That is a benefit that FP misses out on. You can still analyze Rust programs with a great deal of assurances. The fact that it’s procedural seems to not matter much at all. An example of a language that allows mutability and has a research-level type system is Flix.
> [...] which is a must for anything performance oriented.
But that's also not really true.
Using functional data structures it is possible to write highly performant software. Check out for example "CppCon 2017: Juan Pedro Bolivar Puente - Postmodern immutable data structures"[1], an impressive talk in which someone shows an editor they developed that can edit huge text files without lag, faster than any mainstream editor today. This is enabled by functional data structures. Using functional data structures throughout a project also ensures that parts can be trivially parallelized, which is not true for traditional algorithms, so the program can easily scale with the number of CPUs available. For more on that subject, you might want to watch a few Joe Armstrong talks.
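A minimal sketch of the persistence idea with a stock Haskell structure (Data.Map, a balanced tree): "updating" produces a new version, the old one stays valid, and the two share most of their nodes internally.

    import qualified Data.Map as Map

    main :: IO ()
    main = do
      let v1 = Map.fromList [(1, "a"), (2, "b")]
          v2 = Map.insert 3 "c" v1     -- v1 is untouched; v2 shares structure with it
      print (Map.toList v1)            -- [(1,"a"),(2,"b")]
      print (Map.toList v2)            -- [(1,"a"),(2,"b"),(3,"c")]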
Rust achieves type safety by introducing more language concepts like ownership, borrowing, and lifetimes to formalize who has access, when, and how. It also makes things immutable by default, unless you use `mut`. Unfortunately, Rust's philosophy is shared memory and making shared-memory access safe, rather than message passing, which is at odds with FP.
I think it comes down historically to Church's https://en.m.wikipedia.org/wiki/Simply_typed_lambda_calculus . It originated the most popular form of type theory and also used the simplest functional programming language, lambda calculus.
It's very simple: it's because pure, typed functional programming is not arbitrary but rather fundamental. Natural deduction from logic corresponds to various typed lambda calculi, and functional programming is but a practical manifestation of the lambda calculus.
Under Curry-Howard correspondence simply typed lambda calculus is the term calculus for intuitionistic propositional logic. System F (polymorphic lambda calculus) corresponds to impredicative second-order propositional logic. System Fω corresponds to a kind of higher-order logic. Dependent types correspond to intuitionistic predicate logic, etc.
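To make the simply typed case concrete, here is a minimal Haskell sketch: read the type below as the tautology ((A ∧ B) → C) ↔ (A → B → C), and the two programs are its proofs.

    -- Propositions as types: pairs are conjunction, arrows are implication.
    curryProof :: ((a, b) -> c) -> (a -> b -> c)
    curryProof f = \x y -> f (x, y)

    uncurryProof :: (a -> b -> c) -> ((a, b) -> c)
    uncurryProof g = \(x, y) -> g x y

    main :: IO ()
    main = print (uncurryProof (+) (2, 3))   -- 5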
Other correspondences that are based on sequent calculus instead of natural deduction are more exotic; for example, classical logic corresponds to the μ~μ-calculus, a calculus of continuations which (very) roughly can be understood as continuation-passing style (but in a principled and careful way). Classical linear logic corresponds to a form of session-typed process calculus. Intuitionistic linear logic corresponds to either a process calculus or to a lambda calculus that uses futures (which can be thought of as mutable shared-memory concurrency using write-once cells).
Note however that languages corresponding to sequent calculus, especially ones that come from a dual calculus (classical logic or classical linear logic) contain some sort of commands, choices that you request from a value, which more or less makes them object-oriented languages, albeit without imperative, mutable assignment. In some sense you can escape functional programming by moving to a dual calculus, but you can't escape purity as long as you care about having propositions as types.
From a Curry-Howard point of view no logic corresponds to a general imperative calculus. Imperative programming is simply not fundamental and generally undesirable when doing logic (so when doing type theory). Mutable state with imperative updates can easily be encoded into FP when needed, e.g. via monads, by using linear types, or by having algebraic effects.
That doesn't mean that types are not useful to imperative languages; of course they are. But types in imperative programming are very weak and logically not very interesting, however useful they might be for engineering purposes. Also note that type theory does not mean type system. Many languages have type systems, some more ad hoc than others, but type theories are special, very specific mathematical objects that embody logic (under the Curry-Howard correspondence). All programs written in a type theory terminate, and this is fundamental. Usual programs, which are not concerned with mathematical proofs, certainly don't always terminate.
Of course understanding type theory is a very good way of producing (weaker) type systems that are useful in practical programming, including imperative programming (see for example Rust, which does not employ an ad-hoc type system). Occasionally new logic correspondences are discovered which illuminate certain language features of existing languages. For example Rust's borrowing system was thought to be ad hoc, but now we understand that shared borrows correspond to a logic that arises from semi-axiomatic sequent calculus. The cuts that remain after evaluation (normalization), called snips, are precisely shared borrows, while general cut is memory allocation.
The book in the link is a book about Martin-Löf type theory, which means it is a book about a certain kind of lambda calculus by necessity, there is no other choice.
> From a Curry-Howard point of view no logic corresponds to a general imperative calculus.
From a CH point of view the logic associated with any Turing-complete language is inconsistent, but this applies to both imperative and functional languages. One could imagine a broadly imperative language without a fully-general "while" construct that could have a useful logical counterpart under CH.
This might be similar to how systems like combinatory logic can in some sense be considered "imperative" languages, since they explicitly act on some kind of logical "state" which works much like state in imperative languages; the only rule of inference being "do A then B then C".
Yes, that's true. And there are various logical systems which hint at mutability (apart from linear logic itself). I already mentioned how we can find shared-memory futures in semi-axiomatic sequent calculus. Those futures are mutable, but write-once. This write-once aspect induces a degenerate monotonicity property which can be generalized to arbitrary monotonicity. Mutable variables can exhibit a form of CH as long as writes to them are monotonic in a certain sense; in particular, new writes must not refute old reads. For example, logical variables in logic languages are exactly this: safe, shareable mutable variables which denote evolving proof state during proof search.
I recently started to look at this the other way around. A functional paradigm lets you describe very precisely what a function does through its type. In imperative languages, OTOH, the type signature of a function (which really should be called a procedure) gives you only a little information about what happens when you call it, due to mutable state, side effects, etc.
Type theory absolutely can enhance imperative languages! In fact, we're seeing this happen:
Rust is the prime example - it uses affine types (linear logic) to track ownership and borrowing in imperative code. The type system prevents memory safety bugs at compile time without garbage collection.
C++ concepts (C++20) bring dependent typing to template metaprogramming. You can express "this function works for any type T that satisfies these type-level constraints."
Refinement types in languages like Dafny let you encode invariants directly in the type system for imperative code: int{x | x > 0} for positive integers.
The challenge isn't technical compatibility - it's that imperative programming often emphasizes mutation and side effects, while type theory shines at reasoning about pure transformations. But when you can encode the "shape" of your mutations in types (like Rust's ownership), you get incredible safety guarantees.
The real question might be: why don't more imperative languages adopt these ideas? Legacy compatibility and learning curves are probably the main barriers.
I don’t know what’s worse, the idea that AI agents are participating in the community (slightly more directly than a commenter copy-pasting the output of a prompt), or that someone may want to cosplay as one.
A > B := A $var B $st $te C $contin (A $or B) $st (A(~$incont)C) $var (B(~$incont)C)
So here's a recursive definition for what it mathematically means to have one thing greater than another, where A, B, and C are defined as sets, using only a certain number of operators. I believe that Bertrand Russell's impossibility criterion is no longer true when looking at recursive infinities for example, so this sort of thing may be useful. I don't know that there's currently a type theoretic programming language that knows how to deal with infinities, but it would be interesting if there were a metaprogramming environment that was able to know if statements such as the above were true or not and then write programs about it (so in some ways similar to Coq and in some ways different from other proof based systems).
For example - this statement (as different from the above) is false -
A > B := A $var B $st $te C $contin (A $or B) $st ~((A(~$incont)C) $var (B(~$incont)C))
However, how would we create a computer program that was able to evaluate that? I suppose that it could just iterate a million times and see that it was approximately true such as is used in numerical methods, however that doesn't have the same cognition or way of thinking that a person has.
In some ways this is good as it means that there is still a way to prove that someone writing on the internet is a real person until the machines figure out how to do this and then the internet is now less trustworthy as a method of verification. Which brings up the entire point of how do you prove that someone on the internet is a person and not a machine which is in fact its own problem. Someone can publish a book which you could read and there's no guarantee at this point that you would necessarily know that the book was made by a person and not an artificial intelligence.
I can evaluate a statement as being true in such a way wherein the statement is
Truth is such that it's a set that contains a bunch of sets for which no set contains its contradiction, and if one set implies a second set and the second set implies a third, then the first set implies the third. Each of the sets is infinitely recursive to some degree of numerical analysis until you end up hitting quantum effects, in which case things are both true and false until the distinction itself between true and false no longer applies.
I'll let you come up with the symbols for that one. In this way we have a set of what is and is not true that is more than just a collection of facts but preserves the property that the facts must have implications that are complementary (otherwise we have that one implication is more true than another in which case we no longer have a true set but a set of implications for which something like Occam's razor should be used to determine which implication is more true than another or for which the concept of truth is just ignored in favor of subconscious decision making to the extent that truth is important for what to do in the first place - to the extent that it informs how we think feel or act).
This should be relatively intuitive to a person reading it, however if you attempt to teach Google AI what this means it doesn't understand as it uses a language based input. I find this incredibly interesting as it fundamentally doesn't understand mathematics. If you attempt to teach it to make a picture with n symbols in the picture the algorithm doesn't know how to do this and may make five or 20 symbols or something. In essence it doesn't know how to count.
What this means is that if we want programmers to make the next iteration of things that the AI doesn't know how to do, they may need to start relying on pure mathematics before we can then in some way translate that. On the other hand, perhaps it would be interesting to see if we could train an AI on all pure mathematical symbols and see what it comes up with and then consider if there is a gap between language and mathematics or if in fact that gap does not exist. I posit that it does not in fact exist and that language is post mathematics. Which isn't to say that this means that we should all go around speaking in numbers, as feelings themselves can be nebulous - which brings up compatibilist versus non-compatibilist questions of free will. At some point I doubt that free will exists and then doubt that this matters, although this becomes complicated to say because with words you are not a priori of the logic that implies what is and is not true. For me, things like "happiness" and "rage" can be defined, although that doesn't make them the feelings themselves, although by its definition this may/could be used in negative ways to mechanize the nature of the self.
In any case all of these are incredibly interesting things to me at this point.
Is this book available in print version anywhere? I am unable to find it except for a used hardcover.
Is it good material for diving into this topic today?
TTOP is the standard work on the topic. Some parts I would say fell out of fashion (using Miranda for example), but many parts are either timeless or still just as relevant.
That said, the book is very dense; for me it was just too much the first time I tried to read it. After circling back to it after a while it gets much easier, because you know what parts you really need (which is a common pattern for me at least with everything academic).
TTOP? Do you mean TTFP?
TAPL, I meant TAPL (Types and Programming languages by Pierce)! This would not have happened with a well-typed comment section. I mixed up OP's book (which is nice as well!) and the abbreviation. Sorry!
Sure, why not. It seems to be a pretty good exposition of the material. When I got interested in this stuff many years ago I worked my way through the 'typing rules' in the coq (nowadays rocq) manual. That is a 'slightly' higher friction way of learning this stuff. This document seems to be more pedagogical.
Have you found this stuff useful during the many years since you learned it? Or you don't mean you mastered it enough to judge its usefulness?
I have a personal coq/rocq project regarding the verification of software, so for that purpose it is highly useful. I also wrote a proof assistant myself (https://github.com/chrisd1977/system).
A related book (seems to be more often cited) is Types and Programming Languages by Benjamin C. Pierce. That seems to be more concrete (as opposed to theoretical), with chapter titles like "An ML Implementation of {some concept}".
They are very different books. TAPL is a book about programming language semantics, TTAFP is a programmer-oriented book about Martin-Löf type theory.
There is very little overlap.
TAPL is definitely the book to pick up if you are interested in programming language semantics. But if you are interested in logic, dependent types, or the Curry-Howard correspondence, there are potentially better and more modern materials than TTAFP (not to say that TTAFP is bad). If you care about formalizing programs, Software Foundations is a much better resource, and if you care about mathematics and logic, there are resources specifically suited to that.
TAPL is a perfectly good book, but some in academia will tell you that it is somewhat dated.