Always be writing
This setup was supposed to help me write, but it’s not doing that very successfully. I think even with all the allowances I give myself for incomplete sentences etc., there’s still some overhead even in creating a post, choosing a title / excerpt, and finally, in committing it to the repo.
Right now it’s like email or IM; it needs to be like talk.
To take that choice out of the loop, I’ll probably start dumping everything in this file, and extract things when they are reasonably complete.
Sticking this at some random date in the middle, hope no one notices.
Table of contents
- Manjul HT
- Snooping on DEK archives
- Exercises in programming style
- Arxiv.org bulk API
- Proving that a number is prime
- Literature and “feelings”
- Bookmarks
- Working with files in JavaScript
- Sanskrit conjuncts (consonant clusters) by frequency
- Art and “kitsch”
- More TeX links
- CS and interviews
- Devamārga
- CTAN mirror stuff
- Angles are easier to approximate from above
- TikZ notes
- Simple time logger
- More
- CSS Notes, round 2
Manjul HT
An article on the mathematician Manjul Bhargava, who won the Fields Medal in 2014, from 1997 when he was in his first year of grad school — in Hinduism Today: https://www.hinduismtoday.com/modules/smartsection/item.php?itemid=4868
A bit of background: Manjul Bhargava was born in Canada, but grew up in Long Island, New York, where his mother Mira Bhargava was/is a mathematician at Hofstra University. He spent summers in India, where his grandfather, P. L. Bhargava, was a noted Sanskritist (HOD of )
Fn: https://news.hofstra.edu/2014/08/14/prof-mira-bhargavas-son-wins-top-math-prize/ https://twitter.com/hofstrau/status/499945019009429505
out of the boy! Confident and proud of his Hindu culture and identity, this young Ontario-born son of a chemist father and mathematician mother has proven that he is truly Aryabhata’s descendent in mind and spirit. This January he received the prestigious Frank and
Unlike in India where merely affirming one’s Hindu background is almost taken to be a political act, it seems possible elsewhere to have a more straightforward appreciation of one’s inheritance.
Though with other challenges…
The family’s staunch vegetarianism did cause some minor clashes in elementary school, but Manjul stood his ground: “Sometimes kids I ate with would make fun of me for not eating meat–‘You’ve never had a hamburger in your life?’ they’d ask incredulously. I would remind them that what they were eating were slices of dead cow and pig, and I’d relate cruel and gruesome stories of the slaughterhouse to them. This generally grossed them out enough to never make fun of vegetarianism again. In fact, afterwards many of them stopped eating meat altogether–at least in front of me!”
Namely: one needs inoculation against many memes / forms of peer pressure; one needs to be very self-assured and confident. Where does that come from?
He cheerfully admits: “I never really liked going to school, and so I rarely attended. Instead, I spent most of my childhood biking, playing tennis and basketball with neighborhood kids, writing, flying kites, reading recreational math books, and learning to play the sitar, guitar, violin and the tabla.”
Manjul, the winner of the First Annual New York State Science Talent Search, almost didn’t graduate because of his carefree inclination to skip classes that couldn’t teach him anything he didn’t know already. After all, he had completed all his high school’s math and computer courses by ninth grade! Still, he did manage to graduate–as the class valedictorian, no less.
Self-assurance / confidence, extending to knowing that education is much more than schooling (and definitely more than the games one plays to get a job or whatever), seems very important.
The last paragraph of this article says:
He is also keenly interested in linguistics in which he has published research work. It was his grandfather, a linguistics scholar, who taught him Sanskrit and developed his interest in linguistics.
Wonder what this is?
https://www.hinduismtoday.com/modules/smartsection/item.php?itemid=4868
https://www.thehindu.com/opinion/op-ed/fields-medal-winner-manjul-bhargava-hope-indian-youth-take-up-research-in-sciences/article6312471.ece
https://thewire.in/history/india-has-to-be-its-own-cultural-ambassador-but-it-has-to-be-scientific-about-it-manjul-bhargava
https://paw.princeton.edu/article/play-fields-math
https://www.princeton.edu/~hindu/about.html – “faculty advisor”
http://contrarianworld.blogspot.com/2016/06/manjul-bhargava-sanskrit-indian.html (someone’s blog post… seems to be “Aravindan Kannaiyan”, a Christian, anti-Brahmin etc, but with a hint of openness and understanding, or at least reasonableness)
https://www.indiatoday.in/india/story/fields-medal-manjul-bhargava-maths-nobel-reaction-indians-204049-2014-08-14
https://en.wikipedia.org/wiki/Manjul_Bhargava
https://www.thehindubusinessline.com/news/professor-of-permutations-percussion-and-poetry/article7457162.ece
http://www.indictoday.com/reviews/being-hindu/
Google search for [manjul bhargava reading gauss]:
- https://paw.princeton.edu/article/play-fields-math
- https://www.mathunion.org/fileadmin/IMU/Prizes/Fields/2014/news_release_bhargava.pdf
- https://plus.maths.org/content/conversation-manjul-bhargava
- http://ieee.scripts.mit.edu/urgewiki/images/2/25/Higher_composition_laws.pdf
- https://www.quantamagazine.org/number-theorist-manjul-bhargava-is-awarded-fields-medal-20140812/
Google search for [p l bhargava]:
- http://plbhargava-conference.org/bio.htm
http://www.heidelberg-laureate-forum.org/blog/laureate/manjul-bhargava/
https://www.dnaindia.com/india/report-maths-wizard-manjul-bhargava-s-kin-in-kerala-elated-2010838
https://www.thehindu.com/todays-paper/tp-national/tp-kerala/payyannur-connection-of-fields-medal-winner/article6315148.ece
http://www.rediff.com/getahead/report/achievers-math-research-is-not-considered-a-reasonable-career-option/20140904.htm
https://timesofindia.indiatimes.com/nri/us-canada-news/PIO-with-mastery-over-Sanskrit-and-tabla-wins-Math-Nobel/articleshow/40199221.cms
https://www.deccanherald.com/content/453907/award-winning-us-prof-teach.html
https://economictimes.indiatimes.com/industry/services/education/manjul-bhargava-to-lead-prime-minister-narendra-modis-teach-in-india-programme/articleshow/45935553.cms
http://www.towntopics.com/wordpress/2018/09/05/princeton-professor-promotes-math-and-magic-at-nyc-museum/
https://www.thehindu.com/news/national/govt-jettisons-scientific-advisory-panels/article24803494.ece
Snooping on DEK archives
Well it’s on the public internet…
Sort by datetime.
1973: Some experiments with some Lisp-like language, and with SAIL, and a letter (by wife)
1974: More experiments with SAIL (more sophisticated), probably a memory dump of Greenblatt chess program,
1975: some “failed.txt” files, which are probably emails that didn’t get sent (IIRC mutt also had something like that)
1976: ART(ACP) errata.
1976-03-21 21:33 TEST .ART [ 1,DEK] 1 4275 – hmm interesting test data. What for?
1976-04-28 10:29 TEMPO .ART [ 1,DEK] 1 3510 – similar
1976-05-15 09:05 TEST .ART [ART,DEK] 1 4475
Lots more errata etc. Also see a .XGP file. (“Errata et Addenda May 14 1976”)
1976-05-23 14:27 BIGART.XGP [ART,DEK] 1 84910
1976-06-10 01:18 DON .SAI [ART,DEK] 1 33920 – looks like something that compiles to PUB code.
1976-09-21 23:49 DON .SAI [ART,DEK] 5 34560 – later version of above.
My guess is, this program converts from the “.MAS” files (see around there) to .PUB files? Some custom syntax to save typing?
Lots of errata etc files, all of these are interesting, but it would be good to first know what they are.
1976-10-29 11:11 SQRT .SAI [ 1,DEK] 1 7680 – “Floyd’s square root problem”
Some christmas letters…
1977-01-17 09:54 OPTBP .SAI [ 1,DEK] 1 2560 – optimal boolean eval
1977-03-29 13:54 JFR .POX [ 1,DEK] 1 101120
– This is something interesting that I’ve seen discussed nowhere. It appears that JFR = John Fredrick Reiser, one of DEK’s PhD students. (Thesis: “Analysis of Additive Random Number Generators”, 1977) (Also the author of this about SAIL: https://dl.acm.org/citation.cfm?id=892120) And it appears that the “POX” system used was developed by REM (“I also thank Robert Maas for developing the typographical software”), who possibly invented the idea of a “brick” character. (There are POX manuals in other users’ archives e.g. https://www.saildart.org/HOW.ALS[UP,DOC]7 .) Surely it must have influenced TeX somewhat? The picture of the POX author post this period (what you can find on the internet) doesn’t seem too great, sadly.
This by McCarthy mentions POX: http://www-formal.stanford.edu/jmc/office/office.html
See also www1.cs.columbia.edu/~kar/pubsk/seminar.ps –
Tuesday January 31 (1989 or 1990?) Before the start of class DEK presented certificates typeset with a historic system called POX to the trivia hunt participants. He noted that this may well be the last time that POX would ever be used since it will disappear when the machine SAIL is decommissioned
BTW, compare this date with that of the galleys!
1977-04-22 19:38 BELCOR.BAI [ 1,DEK] 1 9615 – not sure how to read this, but possibly interesting.
1977-04-25 18:01 BELFST.SAI [ 1,DEK] 1 2560 – “convert Belfast tapes to ASCII”
1977-04-27 11:58 CORR .ASC [ 1,DEK] 1 11520
1977-04-27 12:00 VOL2A .ASC [ 1,DEK] 1 175360
1977-04-27 12:03 VOL2B .ASC [ 1,DEK] 1 126720
– these seem to be either dumps or the actual source of the entire text? Definitely in some weird format though.
1977-04-27 13:51 BELCOR.SAI [ 1,DEK] 1 6400 – possibly companion to BELCOR.BAI above
1977-05-13 04:59 TEX1 . [ 1,DEK] 1 65280
1977-05-13 04:59 TEX1 . [ 1,DEK] 2 65280
1977-05-13 04:59 TEXDR .AFT [ 1,DEK] 1 65280
Aha, here we begin!
1977-05-26 00:17 XGPIT .SAI [ 1,DEK] 1 1920
1977-05-26 00:20 SPLINE.SAI [ 1,DEK] 1 19200
– I think these show that the fonts were more interesting.
Exercises in programming style
Started reading this book today. What an excellent idea! I encountered this book through a review. (I think this one, via this typically pointless discussion. There seems to be a video but I haven’t watched it.)
The problem it calls “term frequency”: given a text file, print out the N=25 most common words from it, along with their frequencies.
As in the review, I decided to try solving the problem myself, before reading the examples.
The approach that comes to mind, and what I think of as the “natural” way one (or just I?) would write this program today:
- Keep a map (dictionary) from words to counts: Given the text file, go over each word, incrementing its count in the dictionary.
- After this pass over the text file, you have the dictionary. Make a pass over the dictionary, sorting by value (the count). (Maybe keeping only the top N.)
- Print the set finally, in descending order.
Before I thought of actually writing the code, I recalled that something similar (perhaps the same problem?) was the exercise that Bentley gave to Knuth for writing a literate program, and the “review” by the Unix advocate (Doug McIlroy IIRC?). So I could also write:
cat file.txt | tr ' ' '\n' | sort | uniq -c | sort -nr | head -n 25
or something like that.1 (Further tweaks to output if necessary.)
When I actually thought of writing down Python code, though, a further shortcut suggested itself:
from collections import Counter
c = Counter()
for line in open('file.txt').readlines():
    for word in line.split():
        c[word] += 1
for (word, count) in c.most_common()[:25]:
    print('%s - %s' % (word, count))
There are a bunch of assumptions implicit in all three approaches mentioned so far, which the prologue of the book (where the problem is first properly defined) shows are not actually correct: it asks us
- to ignore capitalization, and
- ’to ignore stop words like “the”, “for”, etc.’
There are many decisions that need to be made if one actually starts writing a program (notwithstanding the fact that we already wrote two programs without making any of these decisions explicitly):
- The first is “what is a word?”
- The two programs above assumed, more or less, that words are that which are separated by spaces: roughly, that a word is a maximal sequence of non-space characters. (But even they differ when a line ends or starts with a space.)
- Is this right? What about punctuation, say apostrophes? Surely “doesn’t” shouldn’t be split at the apostrophe. It seems we want “can’t” and “cant”, or “won’t” and “wont” to be different words after all—definitely “its” and “it’s” aren’t the same.
- But in case of other punctuation, like dashes or full stops (periods), we probably shouldn’t count them as part of the word: in the previous paragraph, the final “same.” isn’t a word (it’s only “same” that is a word), neither is “all—definitely” a single word.
- What if a line ends with a hyphenated word? Can this happen, and should we detect this and join with the part on the next line?
- Some linguists will sometimes say that things like “post office” are words; should we care?
- What does it mean to normalize for capitalization?
- Should we convert everything to lowercase?
- Or uppercase?
- Or perhaps the “most common” case of the word should be retained, so that “I” can remain “I”?
- Are “us” and “US” really the same word?
- Perhaps we should also keep track of the actual distribution of different cases that a word occurred in, so that if we later decide we need to distinguish these, we don’t have to re-read the input file?
- What is meant by ’to ignore stop words like “the”, “for”, etc.’?
- What is “etc.”? What are the other stop words?
- When should we ignore them: when reading the file (as early as possible) or perhaps not even in the final output (as late as possible), leaving it to the user to decide which ones are interesting or not?
- The
Arxiv.org bulk API
Generic API: https://arxiv.org/help/api/index
Bulk data S3: https://arxiv.org/help/bulk_data_s3
https://github.com/acohan/arxiv-tools
Proving that a number is prime
Imagine I am trying to convince you that a particular (large) number is composite (i.e. not a prime number). To make this more vivid, imagine that you are the “customer” and I am trying to “sell” you a composite number.2
If I just give you the number and say “believe me, this is composite; check for yourself”, that might require an unfeasible amount of work on your part (so you may not be too happy about buying this number from me). For example, is the number $224919749317807621056963908336317800061643423036754378897719$ composite?
Instead (if I knew everything about the number), when giving you the number, I could also give you an easy way to verify that the number is composite: to prove that a number $N$ is composite, I just have to give you a number $d$, such that $1 < d < N$ and $d$ divides $N$.
If the number $N$ is $L$ digits long, then I can give you such a $d$ having at most $\lceil L/2 \rceil$ digits, and you have to perform only a small computation: dividing an $L$ digit number by an $\lceil L/2 \rceil$-digit (or smaller) number. This requires not much computation from you, nor much mathematical knowledge beyond the definition of a composite number, and how to do division. The “certificate” is short as well.3
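To make the asymmetry concrete, here is a toy sketch (my own, not from any source) of how cheap the customer’s side of the check is once a divisor is supplied:

```python
# Hypothetical illustration: verifying a compositeness certificate is just a
# range check plus one remainder computation.
def verify_composite(N, d):
    return 1 < d < N and N % d == 0

print(verify_composite(91, 7))  # True, since 91 = 7 * 13
```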
Now imagine instead, for the rest of this post, that I’m trying to convince you that a particular large number is prime. This time, I’m trying to prove the non-existence of divisors, and it is less obvious whether I can give you a short proof that the number is prime.4
That such short proofs exist was first shown in 1975 by Vaughan Pratt.5 The number-theoretic core of the idea is an old one, going back to Lucas in the late 19th century, and D. H. Lehmer in 1927. The certificates (short proofs of primality) generated by this method are known as “Pratt certificates”, and that’s what I’ll discuss in the rest of this post.
Before discussing them in detail, I should mention two later developments that have superseded them:
- A later development is “elliptic curve primality proving”: Wikipedia 1, Wikipedia 2, paper 1. This is almost universally what is used today, such as by the program Primo by Marcel Martin, considered the gold-standard of primality-proving programs. For more on the theory and practice of verifying Primo certificates, see here, here, here, here, here.
- Now we know, thanks to the celebrated work of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, that “$\mathrm{PRIMES}$ is in $\mathsf{P}$” (see 1, 2, 3, etc.) — i.e. that if I simply give you the prime number and no proof, you can (in principle) verify yourself that it is prime, with a polynomial-time computation. So (at least if you don’t care about anything beyond “polynomial-time or not”), the entire question of certificates is moot. But this AKS algorithm is not used in practice (ECPP is still used), and in any case not trivial, so beyond the scope of our discussion. (While I was writing this post, this was posted.)
Pratt certificates are not practical for very large primes (over 100 digits say), because in order for me to find the certificates, I would have to factor some large numbers. (In that sense, they are even more impractical than AKS — in fact we don’t even know how to find them in polynomial time.) So why discuss them at all? Apart from the historical interest, the great thing about Pratt certificates IMO is that, for you, as the customer, they are easy to understand: the mathematics required can be explained to a high-school student with no number-theory background, from first principles.
It rests on the following:
Theorem: Given two numbers $p$ and $a$, suppose that $p$ divides $a^{p-1} - 1$, and for every prime number $q$ that divides $p-1$, it so happens that $p$ does not divide $a^{(p-1)/q} - 1$. Then $p$ is prime.
To prove this theorem, first we will need a very tiny part of modular arithmetic.
Lemma 1: Given a number $p$, let’s call any number $n$ a “unit mod $p$” (or simply a “unit”) if $p$ divides $n - 1$. Suppose $x$ is a unit. Then $xy$ is a unit if and only if $y$ is a unit.
Proof of Lemma 1: Note that $xy - 1 = (x-1)y + (y-1)$. As $p$ divides $x-1$, it divides the first term on the right-hand side, namely $(x-1)y$, so it divides the left-hand side if and only if it divides the other term $(y-1)$. This proves Lemma 1. $\Box$
Lemma 2: Let $r$ be the smallest positive number $x$ such that $p$ divides $a^x - 1$. Then $p$ divides $a^n - 1$ if and only if $n$ is a multiple of $r$.
Proof of Lemma 2: Suppose $n = br + c$, where the remainder $c$ satisfies $0 \le c < r$. Then note that, as by assumption $a^r$ is a “unit”, so is $(a^r)^b$ (by repeated application of Lemma 1), and therefore (by Lemma 1 again) we see that $a^n = (a^r)^b a^c$ is a unit if and only if $a^c$ is a unit. For this to happen, as $r$ was defined as the smallest positive number $x$ for which $a^x$ is a unit, $c$ cannot be a positive number, i.e. $c$ has to be $0$ — or in other words $n$ has to be a multiple of $r$. $\Box$
With these two proved, we can now give a proof of the theorem.
Proof of Theorem:
- Let $r$ be the smallest positive number $x$ such that $p$ divides $a^x - 1$. (As we are given that $p-1$ is such a number $x$, we know that $r$ is at most $p-1$. In fact, we will show that $r$ is exactly $p-1$.) As $p$ divides $a^{p-1}-1$, this means (by Lemma 2) that $(p-1)$ must be a multiple of $r$, say $p - 1 = kr$. Now,
- if $k > 1$, then by picking $q$ to be any prime factor of $k$ (possibly $k$ itself, if $k$ is prime), we see that $(p-1)/q$ is also a multiple of $r$, and therefore (by Lemma 2 again) $a^{(p-1)/q}$ must also be a unit. But we’re told that this is not the case.
- Therefore $k$ must be $1$, i.e. $p - 1$ must be the smallest positive number $x$ such that $p$ divides $a^x - 1$.
- Now, consider the numbers $1, a, a^2, a^3, \dots, a^{p-2}$. There are $p-1$ of them. Consider the difference between any two of them, say $a^n-a^m$ where $n > m$. This is $a^{m}(a^{n-m} - 1)$, and here $p$ cannot divide the first factor (as $a$ is relatively prime to $p$) nor the second (as $a^{n-m}$ is not a unit, by what we just proved: note that $0 < n - m < p - 1$). This means that $p$ does not divide $a^m - a^n$, i.e. the two numbers $a^m$ and $a^n$ leave different remainders when divided by $p$. As this is true for any pair of numbers $m$ and $n$, this means that all of the $p - 1$ numbers we considered must leave distinct remainders when divided by $p$, and therefore these remainders must be $1, 2, 3, 4, \dots, p-1$ in some order.
- If $p$ were not prime, i.e. if it had a factor $d$, then it would never be possible to have a number $a^n$ leave a remainder of $d$ when divided by $p$ — that would mean that $d$ divides $a^n$ so it would also divide $a^{p-1}$ so it (and therefore $p$) could not divide $a^{p-1} - 1$. So $p$ must be prime. $\Box$.
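As a quick sanity check of the theorem (my own example, not from the post): take $p = 13$ and $a = 2$. Then $p - 1 = 12$, whose prime factors are $2$ and $3$. We have $2^{12} - 1 = 4095 = 13 \times 315$, so $p$ divides $a^{p-1} - 1$; on the other hand, $2^{12/2} - 1 = 63$ and $2^{12/3} - 1 = 15$ are not divisible by $13$. So the theorem correctly concludes that $13$ is prime.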
These then are the ideas that go into a Pratt certificate. With a Pratt certificate, I give you the following:
- I give you the number $N$, which I’m claiming to be prime.
- I give you a number $a$, such that you can check that $N$ divides $a^{N-1} - 1$,
- I give you the complete prime factorization of $N - 1$, such that you can check that for any prime $q$ that divides $N-1$, if you look at $a^{(N-1)/q} - 1$, then it is not divisible by $N$.
- I give you proofs that each of the prime numbers I gave in the prime factorization of $N-1$ is indeed prime — and these proofs I give using smaller Pratt certificates themselves!
You can read more in Wikipedia and Pratt’s short paper.
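To make the recursive structure concrete, here is a minimal sketch (my own code, with a certificate format I made up for illustration, not any standard one) of what verifying such a certificate involves:

```python
# cert = (a, [(q, e, cert_q), ...]): a witness a, plus the factorization of
# N-1 as a product of q**e, with a sub-certificate for each prime factor q.
def verify_pratt(N, cert):
    if N == 2:  # accept 2 outright as the base case
        return True
    a, factors = cert
    # The claimed factorization of N-1 must be complete.
    prod = 1
    for q, e, _ in factors:
        prod *= q ** e
    if prod != N - 1:
        return False
    # N must divide a^(N-1) - 1 ...
    if pow(a, N - 1, N) != 1:
        return False
    for q, _, cert_q in factors:
        # ... and must not divide a^((N-1)/q) - 1 for any prime q | N-1,
        if pow(a, (N - 1) // q, N) == 1:
            return False
        # and each q must itself come with a (smaller) Pratt certificate.
        if not verify_pratt(q, cert_q):
            return False
    return True

# Example: 7 is prime, with witness 3 and 7 - 1 = 2 * 3.
print(verify_pratt(7, (3, [(2, 1, None), (3, 1, (2, [(2, 1, None)]))])))  # True
```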
Examples
Software
I am not aware of software for finding these Pratt certificates. The problem is that to prove a large number $p$ prime, we need to factor $p-1$, which is also a large number.
There is something close, in pari, when you invoke `isprime` and pass `1` as the second argument. It is documented here (see also here, here, here) as:
This returns the coefficients giving the proof of primality by the p - 1 Selfridge-Pocklington-Lehmer test
— I think I figured it out from here (and a bit from here, also badly written on Wikipedia): instead of showing only that $a^{(N-1)/q} \not\equiv 1 \pmod N$, i.e. that $N \nmid (a^{(N-1)/q} - 1)$, we show that $N \perp (a^{(N-1)/q} - 1)$ — using here $x \perp y$ to mean that $\gcd(x, y) = 1$ — and in return we only have to prove it for primes in some “factored part” of $(N - 1)$ that is at least $\sqrt{N}$.
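A rough sketch of that check as I understand it (my own code, not what pari actually does; and as before, each prime $q$ used would itself need its own primality proof):

```python
from math import gcd

def pocklington_style_check(N, a, qs):
    """Check N prime given witness a and the primes qs of a factored part of N-1."""
    # Accumulate F = the part of N-1 built from the given primes.
    F = 1
    for q in qs:
        while (N - 1) % (F * q) == 0:
            F *= q
    if F * F < N:              # the factored part must be at least sqrt(N)
        return False
    if pow(a, N - 1, N) != 1:  # a^(N-1) must be 1 mod N
        return False
    # For each q, a^((N-1)/q) - 1 must be coprime to N (not just "not 1 mod N").
    return all(gcd(pow(a, (N - 1) // q, N) - 1, N) == 1 for q in qs)
```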
Example:
pari.isprime(10000000000000000369475456880582265409809179829842688451922778552150543659347219597216513109705408327446511753687232667314337003349573404171046192448274699, 1)
[2, 2, 1; 773, 2, 1; 3430787131, 2, 1; 177830900711, 2, 1; 5736810279563022496918256153681231, 2, [2, 7, 1; 5, 2, 1; 11, 2, 1; 719090842404611, 2, 1]]
— this took something like 4 hours for a 155-digit prime.
I’m not sure how to read this, but here are the facts:
- With N = 10000 00000 00000 00369 47545 68805 82265 40980 91798 29842 68845 19227 78552 15054 36593 47219 59721 65131 09705 40832 74465 11753 68723 26673 14337 00334 95734 04171 04619 24482 74699 (spaces for clarity), the factorization of N - 1 is [2 , 773 , 34307 87131 , 17783 09007 11 , 57368 10279 56302 24969 18256 15368 1231 , 18480 72574 75428 38412 29608 79808 05424 74912 36118 10555 00369 77261 35763 76775 78396 96558 97497 97900 37414 03]
- We can check that $2^{(N-1)/2} \not\equiv 1\pmod{N}$, that $2^{(N-1)/773} \not\equiv 1$, that $2^{(N-1)/3430787131} \not\equiv 1$, etc. Try the following in Sage:
fs = [2, 773, 3430787131, 177830900711, 5736810279563022496918256153681231, 1848072574754283841229608798080542474912361181055500369772613576376775783969655897497979003741403]
product = 1
for f in fs: product *= f
product == N1 - 1
for f in fs: print 2.powermod((N1 - 1) / f, N1) != 1
- Finally, we can generate similar certificates for each of those primes:
for f in fs:
    print f
    print pari.isprime(f, 1)
gives
2
1
773
[2, 2, 1; 193, 2, 1]
3430787131
[2, 2, 1; 3, 2, 1; 5, 2, 1; 89, 2, 1; 142771, 2, 1]
177830900711
[2, 7, 1; 5, 2, 1; 19, 2, 1; 257, 2, 1; 479, 2, 1; 7603, 2, 1]
5736810279563022496918256153681231
[2, 7, 1; 5, 2, 1; 11, 2, 1; 719090842404611, 2, 1]
1848072574754283841229608798080542474912361181055500369772613576376775783969655897497979003741403
[2, 2, 1; 3, 5, 1; 233, 2, 1; 13523, 2, 1; 38377, 2, 1; 11761095715573, 2, 1; 371860136940003389, 2, 1]
10000 000000 000000 369475 456880 582265 409809 179829 842688 451922 778552 150543 659347 219597 216513 109705 408327 446511 753687 232667 314337 003349 573404 171046 192448 275666 (155 digits) = 2 × 3 × 93581 × 782764 541753 × 22752 541933 520162 611588 132845 788925 987350 684817 349828 720701 506059 412842 231831 122206 814184 277933 099371 796471 926801 319705 877117 230314 257127 (137 digits)
Literature and “feelings”
In an article on Nautil.us titled “Why Doesn’t Ancient Fiction Talk About Feelings? Literature’s evolution has reflected and spurred the growing complexity of society”, the author (Julie Sedivy) makes the following points.
Nevermind, cut out the text that was here… left a comment there instead —
There appear to be two currents in this article:
Part of the article, possibly its main intended point, is about the psychological effects of reading — what happens to you when you read fiction that makes you think of others’ state of mind. Most of the references are about this, and even the title of the page (if you look up at your browser’s top bar) is actually “Why You Should Read Fiction” (perhaps this was the author’s originally intended title). This part is interesting and intriguing—of course as with all psychological research it is to be treated cautiously until there is sufficient replication, but at least the thesis is somewhat plausible. (Though as to the specific point of mental states being explicit in the text, the final reference says the opposite: that it’s better for them not to be.)
The rest of the article (including the headline, probably chosen by the Nautilus editors) seems to make a claim about non-“modern” literature in general, and proceeds to ask why it is different in a certain way. (Note by the way that the actual article itself nowhere uses the word “ancient”—that occurs only in the headline and a caption featuring a different King Harold, neither of which was likely supplied by the author—and uses only “medieval”.) Here, I think there are many problems.
For one thing, ancient and medieval literature do talk about feelings, as in many examples given in the comments here. (Even more if you include non-“Western” literature.) Even the article itself includes a quote that in medieval or classical texts, “people are constantly planning, remembering, loving, fearing”, so the mental states clearly existed in their literature too.
But let’s say we accept that the claim is true to some extent (e.g. that in non-modern literature one is unlikely to find 12 pages about a boy at a swimming pool). Then there’s an actual question to be answered: why this difference?
The theory that earlier people had less “social intelligence” seems too self-congratulatory. Note that the final section of the article says that having mental states not be called out explicitly (as is claimed about earlier literature) might hone mentalizing skills better, and happens when the author has confidence that the reader can pick them up from clues. These actually point in the opposite direction!
“The past is a foreign country; they do things differently there” — and when we encounter a foreign culture and notice differences, there is a tendency to pathologize the differences: try to explain how they show the superiority of our own culture. Instead, we can take the opportunity to re-examine some of our implicit assumptions.
For example, what is the goal of literature? The Sanskrit theorists held that the highest goal of all literature (including poetry and drama) was “rasa”: evoking in the reader an essentialized aesthetic experience of emotion. There is not enough room here to describe rasa theory (you can look up Wikipedia or follow some random links from a blog post here: https://bit.ly/2R4oiYA), but suffice it to say that the pre-modern authors had a lot of sophistication about the mental states of the reader. If authors regard literature as seeking to provide genuine pleasure (or a transformative experience, or whatever) to the reader, rather than something like (say) exhibiting the author’s sophistication, then that goes a long way in explaining why some non-modern literature may seem less “sophisticated”. This seems a better answer IMO: authors write differently when they aim at different things. And in fact there are many diverse indications pointing in this direction, that many of the non-moderns were actually the experts: from complaints about the present-day prosing of poetry and about literature written for the writer’s enjoyment rather than the reader’s, to the author’s own experience related at the end of this article, of how a story with no explicit mental states or incursions into anyone’s consciousness nevertheless “provoked an empathic response strong enough” (rasa!) and produced an effect that is “deeply moving”.
Bookmarks
Saving some links so that I can close tabs.
The legal background of Sabarimala
(Gave up reading — on the really contentious points it takes unexamined stances axiomatically; all the argumentation is elsewhere.)
- https://indconlawphil.wordpress.com/2016/04/13/sabrimala-key-constitutional-issues/
- https://indconlawphil.wordpress.com/2017/10/13/asking-the-right-questions-the-supreme-courts-referral-order-in-the-sabarimala-case/
- https://indconlawphil.wordpress.com/2018/07/29/guest-post-the-essential-practices-test-and-freedom-of-religion-notes-on-sabarimala/
- https://indconlawphil.wordpress.com/2018/09/28/the-sabarimala-judgment-i-an-overview/
- https://indconlawphil.wordpress.com/2018/09/29/the-sabarimala-judgment-ii-justice-malhotra-group-autonomy-and-cultural-dissent/
- https://indconlawphil.wordpress.com/2018/09/29/the-sabarimala-judgment-iii-justice-chandrachud-and-radical-equality/
More TeX links
- The paper: http://eprg.org/G53DOC/pdfs/knuth-plass-breaking.pdf
- IEEE Annals of the History of Computing: see special issues on Desktop publishing, 2018 issue 3 and 2019 issue 2. Extras here: https://history.computer.org/annals/dtp/
- The TeX-related article has its own “Extras” page: http://tug.org/pubs/annals-18-19/ → http://tug.org/pubs/annals-18-19/part-1-webnotes.pdf , http://tug.org/pubs/annals-18-19/part-2-webnotes.pdf (can get some idea of the article from reading these footnotes…), http://tug.org/pubs/annals-18-19/extending-tex.pdf (Note end of this has an interesting remark about it being a good thing that TeX does not have a “real” programming language in it… maybe the takeaway is that macros, being built into TeX itself, are kind of universal and will work on all TeXs?), http://tug.org/pubs/annals-18-19/euler-summary.pdf (timeline of Euler project at Stanford)
Other?
- https://docs.microsoft.com/en-us/typography/script-development/kannada (intricacies of making an Indic-script font: this specific link is for Kannada)
More things (might be useful for my TeX project)
- http://text-patterns.thenewatlantis.com/2014/07/designing-word.html → on non-justified paragraphs (still rare in books)
- https://github.com/ChrisKnott/Algojammer (wow… this is what a debugger should be) (HN thread: https://news.ycombinator.com/item?id=18321709)
- http://www.pythontutor.com/ — or this
- https://github.com/shreevatsa/pages/tree/0f9c146a770bbd4fe3c7f7174bb576e568dbaa5f/docs/dvi (reminder)
- https://3perf.com/talks/web-perf-101/ — `<figure>` and `<figcaption>` side-by-side (https://news.ycombinator.com/item?id=18332753)
Working with files in JavaScript
In NodeJS, the `fs` module has functions that read a file (by filename, say) and return a `Buffer`. Such a `Buffer` can be turned into a `Uint8Array` or whatever.
const fs = require('fs');
const buffer = fs.readFileSync('hello.dvi'); // type Buffer
const uint8array = new Uint8Array(buffer); // type Uint8Array
We could also directly get a string, by passing in an encoding option. This may not be what we’d do for large files though (e.g. we may use `fs.readSync` with a given number of bytes).
In a web browser: see here, e.g. if in the DOM one has:
<input type="file" id="inputDviFile"/>
then the element’s `files` is a `FileList` object, which contains `File` objects. These can be read with a `FileReader`, which after reading the file sets its `result` member and calls its `onload`.
document.getElementById('inputDviFile').addEventListener('change', handleFiles, false);
function handleFiles() {
  const f = this.files[0];
  const reader = new FileReader();
  reader.onload = function() {
    // reader.result is an ArrayBuffer
    const uint8array = new Uint8Array(reader.result);
    // Do something with uint8array
  };
  reader.readAsArrayBuffer(f);
}
All seems a bit roundabout.
Sanskrit conjuncts (consonant clusters) by frequency
What do we want? (Background)
A consonant cluster is when two consonants occur consecutively without a vowel between them. In Indic scripts, a consonant cluster is written as a conjunct consonant, which in many fonts often has its own specific ligature. (See more at complex text layout.)
These vary in frequency: for example, क्त is quite common, while apparently6 ब्न never occurs in Sanskrit itself. We’d like to get a list of all consonant clusters that actually occur in Sanskrit, in decreasing order of frequency.
Why do we want it? (Testing)
For testing fonts or text layout programs, we’d like to focus on the more frequent cases, to avoid obsessing over unusual combinations that won’t make a difference to the reader for the most part. Further, to test these it’s usually enough to test how they behave on each “orthographic syllable” (roughly: a grapheme cluster / the text between two places where you can place your cursor), as they are rendered independently and kerning (spacing between them) is usually not an issue.
For example, the word “आत्मवान्” has four orthographic syllables: आ, त्म, वा, न् — note that phonetically, it has only three syllables / akṣaras (आ, त्म, वान् or perhaps आत्, म, वान् depending on your definition of syllable). This example illustrates the three different cases that can exist — when represented in Unicode, an orthographic syllable is either:
- An independent vowel, optionally followed by a modifier (anusvāra / visarga / candrabindu) (examples: आ, उँ, आः),
- An optional sequence of [consonant] + [virāma], followed by a consonant (optionally including a nukta), optionally followed by a vowel sign (aka dependent vowel), and optionally followed by a modifier (examples: वा, त्म, फ़्जी, र्त्स्न्या),
- a consonant (optionally including a nukta) followed by a virāma — this only occurs at the end of a word (example: न्).
(I have some doubts about the last one: surely it can also be preceded by a sequence of consonant+virāma? Will become clear shortly. The above is from here, and while I was writing it I also found this and this.)
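As a rough illustration of this structure (my own sketch; the Unicode ranges and the whole regex are my assumptions, not anything from the sources linked above), one could try to match orthographic syllables like this:

```python
import re

C = '[\u0915-\u0939\u0958-\u095f]'  # Devanagari consonants (incl. nukta forms)
V = '[\u0905-\u0914]'               # independent vowels
M = '[\u0901-\u0903]?'              # optional candrabindu / anusvara / visarga
SIGN = '[\u093e-\u094c]?'           # optional dependent vowel sign
NUKTA = '\u093c?'
VIRAMA = '\u094d'
syllable = re.compile(
    f'(?:{V}{M})'                                                    # case 1
    f'|(?:(?:{C}{NUKTA}{VIRAMA})*{C}{NUKTA}(?:{VIRAMA}|{SIGN}{M}))'  # cases 2 and 3
)
print(syllable.findall('आत्मवान्'))  # hopefully ['आ', 'त्म', 'वा', 'न्']
```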
Anyway, as this shows, there is more to testing the font / system than simply the consonant clusters (e.g. the most immediately apparent problem in the absence of any complex text layout being implemented is simply that, for example, कि will be misrendered — a problem with the placement of the vowel sign). Nevertheless, once one is satisfied that vowel signs will be placed properly, the bulk of the remaining problem is testing various consonant clusters, and there one would like to see them in order of frequency.
Another reason is simple curiosity. :-)
What is available? (Links)
Collecting here some links I found while trying to satisfy my curiosity.
This page on Omniglot has a table of conjuncts, but no frequency. Besides, it both includes unrealistic conjuncts and misses a few that actually occur.
This page on TITUS has a list (see “ligatures” at the bottom), again without frequencies or order. This test file in sanskrit-coders/sanskrit-fonts repo has some examples of testing, from TITUS. I got interested in this question (again) while trying to collect examples for this question.
There are a few useful documents by Ulrich Stiehl:
- See the extensive Itranslator 2003 manual, especially starting at page 29 which has a list of clusters with frequencies, but in alphabetical order. On page 76 it has a list of common orthographic syllables (again in alphabetical order), and on page 110, Hindi ligatures.
- The document “Svara Statistics of Taittiriya Brahmana” has accented syllables sorted by frequency in Vedic Sanskrit, but this is not quite what we want.
- There’s apparently a book, called “Conjunct consonants in Sanskrit”, which has exactly the kind of information we want. It is available for sale, for 19.8 Euros (as of today: 23 USD or Rs 1650). The free preview online has two pages’ worth (some 85) of the most common conjuncts.
Finally, thanks to this Jan-2012 thread on samskrita and this Dec-2013 one on bvparishat, I found Oliver Hellwig’s DCS data. It’s based on a large corpus, and there is a `syllables.dat` file using which we can generate our own statistics.
First steps with this data (cleanup)
There is an R script online (and even Hellwig’s own one) and it looks like R is a nice language for dealing with data like this, leading to really short scripts. Might be worth learning R sometime.
For now, let’s do it the long way, with Python. The `syllables.dat` file starts like this:
Syllable;early;epic;classical;medieval;late;total
"a";5174;57752;21378;13821;7089;105214
"thA";575;14773;8413;4675;2738;31174
"taH";413;17714;9469;5347;3529;36472
"sne";8;628;909;262;75;1882
"ha";1598;29472;15661;9526;6902;63159
so we can see that each row (line) consists of semicolon-separated values, and the top row is the header. We don’t care right now about the respective counts in different periods and only care about the total (which should be the sum of the numbers), so let’s parse just that:
def parseFile(filename):
    """Parses syllables.dat and returns total count for each syllable."""
    lines = open(filename, 'rb').readlines()
    assert lines[0] == b"Syllable;early;epic;classical;medieval;late;total\n"
    total = {}
    for line in lines[1:]:
        parts = line.split(b';')
        syllable = parts[0]
        assert len(syllable) > 2 and syllable[0] == syllable[-1] == ord(b'"')
        syllable = syllable[1:-1]
        counts = [int(s) for s in parts[1:]]
        assert sum(counts[:-1]) == counts[-1], counts
        assert syllable not in total, syllable
        total[syllable] = counts[-1]
    return total
With this we can check that something like `parseFile("syllables.dat")` will produce a dictionary like:
{'a': 105214, 'thA': 31174, 'taH': 36472, 'sne': 1882, 'ha': 63159}
(only much larger). Also, I had to remove these two lines from the input data:
"ti;";0;2;0;0;0;2
";ma";0;0;1;0;0;1
To get them in descending order, we can either write more code, or use the standard-library `Counter` by changing
total = {}
to
from collections import Counter
total = Counter()
Along with some more cleanup of the `syllable`, we have:
def parseFile(filename):
    """Parses syllables.dat and returns total count for each syllable."""
    lines = open(filename, 'rb').readlines()
    assert lines[0] == b"Syllable;early;epic;classical;medieval;late;total\n"
    from collections import Counter
    total = Counter()
    seen = set()
    for line in lines[1:]:
        parts = line.split(b';')
        syllable = parts[0]
        assert len(syllable) > 2 and syllable[0] == syllable[-1] == ord('"')
        syllable = syllable[1:-1]
        if syllable[0] in [ord("2"), ord("3")]: continue # Weird junk
        if syllable[0] == ord("'"): syllable = syllable[1:] # Remove avagraha
        if syllable[0] == ord("'"): syllable = syllable[1:] # Sometimes twice
        if syllable in [b"r'ddha", b"r'tha", b"r'dha"]: continue # More weird
        assert b"'" not in syllable, (line, syllable)
        if any(bad in syllable for bad in [b'\x9b', b'\xa0', b'\xa1', b'\xa4', b'\xb2', b'\xbb', b'\xe1', b'\xe2', b'\xe3', b'\xe6', b'\xed']):
            continue
        syllable = syllable.decode('ascii')
        if any(c in syllable for c in [' ', '*', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '9', '<', '>', '?', '[', ']', '_', '{', '}', '~']):
            print('%10s' % syllable, '\t', line.decode('ascii')[:-1])
            continue
        for c in syllable: seen.add(c)
        counts = [int(s) for s in parts[1:]]
        assert sum(counts[:-1]) == counts[-1], counts
        total[syllable] += counts[-1] # Repeats e.g. "pi" and "'pi"
    return total, seen
This still prints the following junk from the data:
-ya "-ya";0;0;0;1;0;1
[... au "[... au";0;11;10;29;8;58
1 Ze "1 Ze";0;0;0;0;1;1
5 Ze "5 Ze";0;3;0;3;0;6
4 Ze "4 Ze";0;0;2;7;0;9
?bi "?bi";0;0;1;0;0;1
n Ve "n Ve";0;0;0;2;1;3
rs / Sa "rs / Sa";0;0;0;1;1;2
la2 "la2";0;0;0;1;0;1
4zo "4zo";0;0;0;1;0;1
9zlSma "9zlSma";0;0;0;1;0;1
10 Ze "10 Ze";0;0;0;1;1;2
*i "*i";0;0;1;0;0;1
__a "__a";0;0;1;0;0;1
12 Ze "12 Ze";0;0;0;1;0;1
[la "[la";0;1;0;0;0;1
<p11>pra "<p11>pra";0;1;0;0;0;1
?a "?a";2;0;1;0;0;3
.dI ".dI";0;0;1;0;0;1
.yvI ".yvI";0;0;1;0;0;1
6 Ze "6 Ze";0;4;0;1;1;6
*rA "*rA";0;1;0;0;0;1
ra3 "ra3";0;0;0;0;1;1
re3 "re3";0;0;0;0;1;1
?di "?di";0;0;0;0;1;1
[hi] "[hi]";0;1;0;0;0;1
d{}vR "d{}vR";0;0;2;0;0;2
d{}bhu "d{}bhu";0;0;1;0;0;1
d{}dR "d{}dR";0;0;1;0;0;1
~ci "~ci";0;0;2;0;0;2
d{}gaM "d{}gaM";0;0;1;0;0;1
d{}gaH "d{}gaH";0;0;1;0;0;1
d{}ga "d{}ga";0;0;1;0;0;1
*sA "*sA";0;4;0;0;0;4
*kru "*kru";0;1;0;0;0;1
*kR "*kR";0;1;0;0;0;1
*yo "*yo";0;1;0;0;0;1
*ghR "*ghR";0;1;0;0;0;1
*pi "*pi";0;1;0;0;0;1
*sa "*sa";0;4;0;0;0;4
*srA "*srA";0;1;0;0;0;1
*pai "*pai";0;1;0;0;0;1
*mR "*mR";0;1;0;0;0;1
*va "*va";0;2;0;0;0;2
*pU "*pU";0;1;0;0;0;1
*le "*le";0;1;0;0;0;1
*pra "*pra";0;3;0;0;0;3
*ya "*ya";0;2;0;0;0;2
*nA "*nA";0;2;0;0;0;2
*bha "*bha";0;1;0;0;0;1
*vi "*vi";0;2;0;0;0;2
*ni "*ni";0;2;0;0;0;2
*dI "*dI";0;1;0;0;0;1
*pa "*pa";0;3;0;0;0;3
*e "*e";0;1;0;0;0;1
*I "*I";0;1;0;0;0;1
*a "*a";0;4;0;0;0;4
*dhA "*dhA";0;1;0;0;0;1
*cA "*cA";0;2;0;0;0;2
*da "*da";0;1;0;0;0;1
*ra "*ra";0;2;0;0;0;2
*ma "*ma";0;4;0;0;0;4
*sta "*sta";0;1;0;0;0;1
*ta "*ta";0;1;0;0;0;1
*sro "*sro";0;1;0;0;0;1
*ka "*ka";0;2;0;0;0;2
*do "*do";0;1;0;0;0;1
*zA "*zA";0;1;0;0;0;1
*pA "*pA";0;1;0;0;0;1
*rU "*rU";0;1;0;0;0;1
*sni "*sni";0;1;0;0;0;1
*ha "*ha";0;1;0;0;0;1
*di "*di";0;1;0;0;0;1
*vyA "*vyA";0;1;0;0;0;1
*gU "*gU";0;1;0;0;0;1
?i "?i";0;0;3;0;0;3
?ha "?ha";0;0;1;0;0;1
d?S?a "d?S?a";0;0;1;0;0;1
?kle "?kle";0;0;1;0;0;1
y.uH "y.uH";0;0;1;0;0;1
[... e "[... e";0;0;0;1;0;1
rs o "rs o";0;0;0;1;0;1
r Sa "r Sa";0;0;0;1;0;1
,R ",R";0;0;0;0;1;1
I3 "I3";1;0;0;0;0;1
-ttA "-ttA";0;0;1;0;0;1
di- "di-";0;0;1;0;0;1
So let’s switch to the converse approach, a whitelist of allowed characters (now that we’ve verified so much about the data, there’s nothing left that’s surprising, so we no longer need to check for it):
def parseFile(filename):
    """Parses syllables.dat and returns total count for each syllable."""
    lines = open(filename, 'rb').readlines()
    from collections import Counter
    total = Counter()
    for line in lines[1:]:
        parts = line.split(b';')
        syllable, count = parts[0][1:-1], int(parts[-1])
        while syllable[0] == ord("'"): syllable = syllable[1:] # Remove avagraha
        def ok(c): return c in b'ADGHIJLMNRSTUabcdeghijklmnoprstuvyz'
        if any(not ok(c) for c in syllable): continue
        total[syllable.decode('ascii')] += count
    return total
(Compare this to the R code, which does the same in far fewer, more elegant lines….)
Table of syllables
As a first step, we can generate a table of syllables from this counter.
def printSylTable(counter, filename):
    """Given counter of syllables print them in order, with relative frequency"""
    total = sum(counter[syllable] for syllable in counter)
    f = open(filename, 'w')
    f.write('%10s %6s %8s %10s\n' % ('Syllable', 'Count', 'Relative', 'Scaled'))
    for syl, count in counter.most_common():
        rel = count * 1.0 / total
        rel16 = rel * 16 * 100
        f.write('%10s %6d %.6f %9.6f%%\n' % (syl, count, rel, rel16))
It starts like:
Syllable Count Relative Scaled
ta 178089 0.023538 37.660217%
va 174701 0.023090 36.943762%
sa 160169 0.021169 33.870701%
ma 155095 0.020499 32.797710%
ra 133892 0.017696 28.313943%
na 133852 0.017691 28.305484%
pa 128947 0.017043 27.268231%
ya 110678 0.014628 23.404913%
ca 108804 0.014380 23.008621%
a 105214 0.013906 22.249449%
vi 103500 0.013679 21.886992%
ka 93078 0.012302 19.683067%
ti 80521 0.010642 17.027657%
vA 77794 0.010282 16.450982%
pra 69557 0.009193 14.709116%
rA 68366 0.009036 14.457257%
da 68013 0.008989 14.382608%
mA 67364 0.008903 14.245365%
ni 67023 0.008858 14.173255%
te 65408 0.008645 13.831733%
nA 63752 0.008426 13.481541%
ha 63427 0.008383 13.412814%
za 62133 0.008212 13.139174%
ga 60557 0.008004 12.805899%
kA 59689 0.007889 12.622344%
tA 58595 0.007744 12.390998%
ja 51690 0.006832 10.930808%
la 51086 0.006752 10.803081%
bha 47810 0.006319 10.110310%
pA 47013 0.006214 9.941769%
yA 45719 0.006043 9.668129%
saM 44582 0.005892 9.427690%
sya 43210 0.005711 9.137555%
pu 40176 0.005310 8.495959%
The full output is available in a separate file here. The “Scaled” is just the “Relative” number scaled by 16x. (TODO: explain better what the columns mean.)
Table of consonant clusters
Every syllable above has a vowel “nucleus”, possibly with consonants on either side — these are what we’re interested in. What we need to do is split on these vowels.
import re

def cluster(syllable):
    clusters = re.split('[aAiIuUReo]+', syllable)
    assert len(clusters) <= 2, (syllable, clusters)
    # Prints only two: Sujh and sajh
    # if len(clusters) == 2 and len(clusters[1]) > 1: print(syllable)
    # So as far as clusters are concerned, we only care about the first part
    first = clusters[0]
    # Further, we only care if it's more than one consonant
    consonants = ['k', 'kh', 'g', 'gh', 'G',
                  'c', 'ch', 'j', 'jh', 'J',
                  'T', 'Th', 'D', 'Dh', 'N',
                  't', 'th', 'd', 'dh', 'n',
                  'p', 'ph', 'b', 'bh', 'm',
                  'y', 'r', 'l', 'v', 'z', 'S', 's', 'h']
    return '' if first in consonants else first

def clusterTable(counter):
    from collections import Counter
    table = Counter()
    for syllable in counter:
        c = cluster(syllable)
        if c:
            table[c] += counter[syllable]
    return table
Calling `clusterTable` with the result of `parseFile` now gives a counter. We can print it out similarly, or rather generalize the previous function:
def printCounter(counter, filename, name='Syllable'):
    """Given counter print them in order, with relative frequency"""
    total = sum(counter[syllable] for syllable in counter)
    f = open(filename, 'w')
    f.write('%10s %6s %8s %10s\n' % (name, 'Count', 'Relative', 'Scaled'))
    for key, count in counter.most_common():
        rel = count * 1.0 / total
        rel16 = rel * 16 * 100
        f.write('%10s %6d %.6f %9.6f%%\n' % (key, count, rel, rel16))
This starts:
Clusters Count Relative Scaled
pr 97826 0.052511 84.016880%
tr 68742 0.036899 59.038378%
sy 64940 0.034858 55.773069%
kS 59595 0.031989 51.182569%
nt 54123 0.029052 46.482997%
rv 46658 0.025045 40.071756%
st 46296 0.024851 39.760856%
ty 41860 0.022469 35.951042%
tv 40453 0.021714 34.742654%
zc 36130 0.019394 31.029889%
tt 33303 0.017876 28.601948%
kt 33169 0.017804 28.486864%
vy 29927 0.016064 25.702504%
ST 27932 0.014993 23.989118%
ry 27037 0.014513 23.220457%
ddh 26814 0.014393 23.028935%
ny 26769 0.014369 22.990288%
rm 26663 0.014312 22.899251%
sv 26208 0.014068 22.508478%
dr 24735 0.013277 21.243407%
dv 24184 0.012981 20.770186%
dy 23364 0.012541 20.065937%
zr 22966 0.012328 19.724119%
sm 22620 0.012142 19.426960%
kr 22183 0.011907 19.051647%
rN 20483 0.010995 17.591619%
rth 20447 0.010975 17.560701%
sth 19946 0.010707 17.130422%
Gg 19327 0.010374 16.598800%
jJ 19028 0.010214 16.342007%
ND 18753 0.010066 16.105826%
rt 17990 0.009657 15.450531%
zv 17153 0.009207 14.731682%
STh 17045 0.009149 14.638928%
br 16505 0.008859 14.175154%
Compare with the preview mentioned earlier — the ranking has many substantial differences:
US: pr tr st sy śc nt rv kṣ ty rm tv tt ny ry ddh vy dr śr dy kr dv nn sm rth ṣṭ kt sv
OH: pr tr sy kṣ nt rv st ty tv śc tt kt vy ṣṭ ry ddh ny rm sv dr dv dy śr sm kr rṇ rth
For example, “st”, “rm”, “ny” are ranked higher by US than in OH’s data. Would be interesting to explore further. E.g. perhaps it has to do with the full corpus versus just the Mahābhārata?
Cleaned-up tables
We can take all this and put it in a separate repository. GitHub can render CSV and TSV data, so that may be better than this format with an ad-hoc number of spaces.
from collections import Counter
import re
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate
def parseFile(filename):
    """Parses syllables.dat and returns total count for each syllable."""
    lines = open(filename, 'rb').readlines()
    total = Counter()
    for line in lines[1:]:
        parts = line.split(b';')
        syllable, count = parts[0][1:-1], int(parts[-1])
        # Remove avagraha
        while syllable and syllable[0] == ord("'"): syllable = syllable[1:]
        def ok(c): return c in b'ADGHIJLMNRSTUabcdeghijklmnoprstuvyz'
        if not syllable or any(not ok(c) for c in syllable): continue
        total[syllable.decode('ascii')] += count
    return total

def cluster(syllable):
    clusters = re.split('[aAiIuUReo]+', syllable)
    assert len(clusters) <= 2, (syllable, clusters)
    # Prints only two: Sujh and sajh
    # if len(clusters) == 2 and len(clusters[1]) > 1: print(syllable)
    # So as far as clusters are concerned, we only care about the first part
    first = clusters[0]
    # Further, we only care if it's more than one consonant
    consonants = ['k', 'kh', 'g', 'gh', 'G',
                  'c', 'ch', 'j', 'jh', 'J',
                  'T', 'Th', 'D', 'Dh', 'N',
                  't', 'th', 'd', 'dh', 'n',
                  'p', 'ph', 'b', 'bh', 'm',
                  'y', 'r', 'l', 'v', 'z', 'S', 's', 'h']
    return '' if first in consonants else first

def clusterTable(counter):
    table = Counter()
    for syllable in counter:
        c = cluster(syllable)
        if c: table[c] += counter[syllable]
    return table

def transliterated(text, scheme):
    """Transliterate consonant cluster from HK to other scheme"""
    assert scheme in ['HK', 'IAST', 'DEVANAGARI']
    return transliterate(text, sanscript.HK, getattr(sanscript, scheme))

def printCounter(counter, filename, name='Syllable', transliteration='HK'):
    """Given counter print keys in order, with relative frequency"""
    total = sum(counter[syllable] for syllable in counter)
    f = open(filename, 'w')
    f.write('%s\t%s\t%s\t%s\n' % (name, 'Count', 'Relative', 'Scaled'))
    for key, count in counter.most_common():
        rel = count * 1.0 / total
        rel16 = rel * 16 * 100
        if name == 'ConsonantCluster' and transliteration == 'DEVANAGARI':
            key += 'a' # Base form is without virama
        tkey = transliterated(key, transliteration)
        f.write('%s\t%d\t%.6f\t%.6f%%\n' % (tkey, count, rel, rel16))

if __name__ == '__main__':
    c = parseFile('syllables.dat')
    cc = clusterTable(c)
    for scheme in ['HK', 'IAST', 'DEVANAGARI']:
        printCounter(c, 'syllables-%s.tsv' % scheme, 'Syllable', scheme)
        printCounter(cc, 'conjuncts-%s.tsv' % scheme, 'ConsonantCluster', scheme)
Put the above as `gen.py` inside a directory containing `syllables.dat`, and run with:
python3 -m venv tutorial_env
source tutorial_env/bin/activate
pip install indic_transliteration
python gen.py
Hindsight
It appears that all we’ve done so far is take the `syllables.dat` file, and perform a minor transformation on it (identify clusters), producing another data file (table). Moreover, although the primary motivation was to get them in decreasing order of frequency, when it comes to actually using this data we may want to (e.g.) restrict to only the top N, and instead look at them in say alphabetical order. (This is how they are arranged in the books / sources mentioned above.)
This is a good task for a spreadsheet. You can retain more dimensions of data (e.g. the per-period data columns), and the user can sort in whatever order they want, etc.
The only nontrivial things we’ve done, really, are:
- Remove avagrahas / filter out messy rows, and
- Identify the consonant cluster for each syllable.
These could also be done in a spreadsheet. That will be the next step.
Art and “kitsch”
There’s a series of three articles (podcasts?) by (Sir) Roger Scruton, here on the BBC:
- (2014-12-05) https://www.bbc.co.uk/programmes/b04sy4tv under title “Faking it” https://www.bbc.com/news/magazine-30343083 under title “How modern art became trapped by its urge to shock”
- (2014-12-12) https://www.bbc.co.uk/programmes/b04tlr0g under title “Kitsch” https://www.bbc.com/news/magazine-30439633 under title “A Point of View: The strangely enduring power of kitsch”
- (2014-12-19) https://www.bbc.co.uk/programmes/b04v66nl under title “Art: The Real Thing” https://www.bbc.com/news/magazine-30495258 under title “A Point of View: How do we know real art when we see it?”
It’s interesting that someone can see the truth partially, and still get so much wrong. Will reread and write more here…
More TeX links
(Didn’t I have a section with that name just above?)
SAILDART
file:///usr/local/texlive/2017/texmf-dist/doc/generic/knuth/web/webman.pdf
http://blog.brew.com.hk/working-with-files-in-javascript/
https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications
https://www.preining.info/blog/2015/04/tex-live-the-new-multi-fmtutil/
https://www.preining.info/blog/2013/07/internals-of-tex-live-1/
https://www.overleaf.com/learn/latex/Articles/
https://github.com/shreevatsa/pages/tree/0f9c146a770bbd4fe3c7f7174bb576e568dbaa5f/docs/dvi
(web2w) https://www.tug.org/TUGboat/tb38-3/tb120ruckert.pdf
http://ctan.math.washington.edu/tex-archive/web/web2w/web2w.pdf
CS and interviews
Interesting thread on Hacker News: https://news.ycombinator.com/item?id=18445609
Someone says they’re a programmer but can’t solve homework problems in CS.
Most of the comments (especially the highly voted ones) are about job interviews, which is not at all relevant to the question at hand. I see this again and again: bring up any mention of algorithms, competitive programming etc., and suddenly everyone wants to gripe about interviews.
See e.g. https://news.ycombinator.com/item?id=16952222 where the top comment starts with “The most irritating thing with these…” (and the author makes clear they mean interviews), even though the book is written for people who enjoy it.
I get it, job interviews are stressful and bad etc. I’ve experienced my share of it, and the only real advice I have from my experience is that to do well in interviews, it helps if you absolutely don’t want the job — but I know that’s nearly useless advice.
Devamārga
Should write this up.
See https://books.google.com/books?id=S4p6DQAAQBAJ&pg=PA531&lpg=PA531 (PUP Sundarakāṇda) which compares commentators (expand those abbreviations when quoting for post).
See https://books.google.com/books?id=pPVJCgAAQBAJ&pg=PA453&lpg=PA453 (van Buitenen MBh)
See http://kjc-sv013.kjc.uni-heidelberg.de/dcs/index.php?contents=fundstellen&IDWord=86817 (DCS search)
Of course quote Ryder Pañcatantra.
See Apte and MW.
CTAN mirror stuff
See https://ctan.org/mirrors/register/
The current solution being used by PG is to `rsync` daily, then `git commit`. This has resulted in a git repository approaching 1 TB in size.
If I were to keep this design, then either:
- I have to figure out whether it’s ok (and inexpensive, e.g. ideally not nonstandard as that has other issues too) to keep such a large directory on any of the cloud providers,
OR
- come up with a solution that shunts away all the actual git objects (ancient blobs) to somewhere remote, sharded on multiple disks etc.
Alternatively, I ignore all this, and come up with a different version control system. E.g. the SVN / CVS / RCS model might be enough: everything is monotonic (timestamps), i.e. the graph is linear and we don’t have to support merge, etc. The state as of a certain date doesn’t need to be encoded (blobs → tree → commit etc), but can just be computed as the state of each individual file as of that date. (Right? Some complication involving deleted paths but that’s probably ok too – latest version will just be “deleted”, the way it shows up in CS: https://cs.chromium.org)
Possible design:
For each path that is a filename, store metadata:
- Chronological list of (timestamp, sha1) (including a special sha1 for “deleted”)
For each path that is a directory, store metadata:
- Flat (i.e. no-history) list of every subdir / file that has ever been a child of it.
For each sha1, store the data (blob).
(Potential problem: )
What about compression? This will probably compress MUCH worse than git (with its delta encoding etc). But, it shards much more easily.
Is this good enough?
I’d have to try it and see :-(
Also have to implement the “find the diff and decide which files need to be updated” thing. But maybe “for every file in the directory tree, find its sha1” will automatically handle that, by going over the entire repo every day.
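A minimal sketch of that daily pass (my own, nothing to do with PG’s actual setup):

```python
import hashlib, os

def snapshot(root):
    """Map every file under root (relative path) to the sha1 of its contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                state[os.path.relpath(path, root)] = hashlib.sha1(f.read()).hexdigest()
    return state

# Diffing today's snapshot against yesterday's gives exactly the files whose
# (timestamp, sha1) lists need a new entry, plus the paths that were deleted.
```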
See https://www.tarsnap.com/index.html for some interesting ideas. (It uses S3, but not Glacier.)
See pricing comparison at the bottom of this page: https://cloud.google.com/storage/pricing-summary/ (looks like except for Glacier, GCP is generally cheaper)
Angles are easier to approximate from above
[Update: This is not actually correct. Ignore everything.]
A surprising and counterintuitive fact that becomes obvious in hindsight, like so many things in mathematics. Suppose we want to pick three points on a square lattice:
[figure]
such that the angle made by them is as close as possible to a certain angle (say 60°). (This problem comes from this post by MJD, motivated by the problem of drawing a good approximation to such an angle on a piece of graph paper.)
Then, the surprising fact is that we’ll get better approximations that are just over 60° than those that are just below 60° (and the same is true for any acute angle, while the opposite is true for obtuse angles).
Let’s see what this means, and why this is true.
Formulating the problem
The square lattice can be considered as the set of all points in the plane with integer coordinates, i.e. all points $(x, y)$ where $x$ and $y$ are both integers. Without loss of generality, we can consider the point where the angle is made as $(0, 0)$, and call the other two points $(a, b)$ and $(c, d)$. That is, we want the angle formed by the three points $(c, d)$, $(0, 0)$ and $(a, b)$ to be as close to $\theta$ (say $60°$ aka $\pi/3$) as possible. To quote from the original post:
we want to find $P=⟨a,b⟩$ and $Q=⟨c,d⟩$ so that the angle $α$ between the rays $\overrightarrow{OP}$ and $\overrightarrow{OQ}$ is as close as possible to $\pi/3$.
What is the angle formed by these three points? It is easiest to use complex numbers (they also help formulate the problem in another way, as we’ll see later). Each point $(x, y)$ in the plane can be considered as the point $x + iy$ in the complex plane. Then, the rotation and scaling needed to take the vector $\overrightarrow{OP}$ to $\overrightarrow{OQ}$ is given by their quotient:
\[\dfrac{c + id}{a + ib} = \dfrac{(c+id)(a-ib)}{(a+ib)(a-ib)} = \dfrac{(ac+bd) + i(ad-bc)}{a^2+b^2}\]If we want just the angle, it is the argument of this complex number, namely an angle $\alpha$ such that $\tan(\alpha) = \dfrac{ad-bc}{ac+bd}$. And if we want $\alpha$ to be close to $\pi/3$, that means we want $\tan \alpha$ to be close to $\tan (\pi/3) = \sqrt{3}$.
So in short: we want to find four integers $(a, b, c, d)$ such that $\dfrac{ad-bc}{ac+bd}$ is close to $\sqrt{3}$ (or in general, $\tan \theta$).
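(A quick numeric spot-check of this in Python, for one arbitrary choice of $(a, b, c, d)$:)

import cmath, math

a, b, c, d = 3, 1, 1, 2   # arbitrary example values
alpha = cmath.phase((c + 1j * d) / (a + 1j * b))
print(math.isclose(math.tan(alpha), (a * d - b * c) / (a * c + b * d)))   # True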
The observation
Suppose we try all possible $4$-tuples of integers $(a, b, c, d)$, in which all four coordinates are no larger in magnitude than a certain $M$, and look at how close the angle gets to $60°$. We can tabulate the “record-setting” approximations, i.e. those that beat anything “smaller”, and also the closest ones at the end.
With $M = 50$, this is what we see:
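(The table itself isn’t reproduced here, but a brute-force sketch in Python that would generate it might look like the following — with a smaller, hypothetical bound so the pure-Python enumeration stays fast. Whether each record is over or under 60° can be read off the printed angle.)

import math

M = 15                 # the text uses M = 50; 15 keeps the loop quick
TARGET = math.pi / 3   # 60 degrees

points = [(x, y) for x in range(-M, M + 1) for y in range(-M, M + 1) if (x, y) != (0, 0)]

# best[m] = (error, angle in degrees, P, Q) among pairs whose largest coordinate is m
best = {}
for (a, b) in points:
    for (c, d) in points:
        alpha = abs(math.atan2(b, a) - math.atan2(d, c))
        if alpha > math.pi:
            alpha = 2 * math.pi - alpha        # undirected angle, in [0, pi]
        err = abs(alpha - TARGET)
        m = max(abs(a), abs(b), abs(c), abs(d))
        if m not in best or err < best[m][0]:
            best[m] = (err, math.degrees(alpha), (a, b), (c, d))

# "record-setting" approximations: better than everything of smaller magnitude
running = float("inf")
for m in sorted(best):
    err, deg, P, Q = best[m]
    if err < running:
        running = err
        print(f"bound {m:2d}: {deg:10.6f} deg   P={P}  Q={Q}")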
Note that the “record” approximations are all (after the first two) greater than $60°$, and that among the “closest” approximations, there are many more that are over $60°$ than are under.
Why is this?
Reformulating the problem
The “usual” problem, where we want $p/q$ to be close to a real number $\alpha$, can be thought of as finding Gaussian integers $(q + ip)$ that lie close to the line $y=\alpha x$ in the complex plane, i.e. points $z$ on the lattice such that $\mathrm{Im}(z)/\mathrm{Re}(z) \approx \alpha$.
Here, with $\alpha=\sqrt{3}$, we want to solve the same problem, i.e. find rational numbers $p/q$ close to $\alpha$, except that $p/q$ must further be of the form $(ad-bc)/(ac+bd)$ for some $(a, b, c, d)$.
Here’s the thing: this is precisely the same as saying that $(q + ip)$ is not a Gaussian prime! This is because $(a-ib)(c+id) = (ac+bd) + i(ad-bc)$, by the Diophantus–Brahmagupta–Fibonacci identity. So any $(q + ip)$ of the form $(ac+bd) + i(ad-bc)$ can be written as the product of two Gaussian integers, and vice-versa.
So the relation between the Diophantine approximation problem and this one is that while there we want to find Gaussian integers close to the line $\mathrm{Im}(z)=\alpha\,\mathrm{Re}(z)$, here we want to find composite Gaussian integers close to the line $\mathrm{Im}(z)=\alpha\,\mathrm{Re}(z)$.
Explanation
Note that we’re trying to approximate $√3$ by a fraction of the form $(ad-bc)/(ac+bd)$. The closest fractions we get (among $a, b, c, d$ below a given magnitude) could be either below or above $\sqrt3$.
Consider an angle of $(\pi/3+\varepsilon)$. The difference $\tan(\pi/3+\varepsilon)-\tan(\pi/3)$ works out to be $\dfrac{4\tan\varepsilon}{1 - \sqrt3\tan\varepsilon}$, while (replacing $\varepsilon$ by $-\varepsilon$) the difference $\tan(\pi/3)-\tan(\pi/3-\varepsilon)$ works out to be $\dfrac{4\tan\varepsilon}{1 + \sqrt3\tan\varepsilon}$.
For small $\varepsilon$, the former is larger than the latter, by a factor of $\dfrac{1 + \sqrt3\tan\varepsilon}{1 - \sqrt3\tan\varepsilon} \approx 1 + 2\sqrt{3}\,\varepsilon$.
(More generally, this boils down to the fact that the second derivative of $\tan(x)$ is positive.)
$\tan(x+y) - \tan(x) = (\tan x + \tan y - \tan x + \tan x \tan x \tan y)/(1 - \tan x \tan y) = (\tan y)(1 + \tan^2 x)/(1 - \tan x \tan y)$
$\tan(x) - \tan(x-y) = (\tan x + \tan x \tan x \tan y - \tan x + \tan y)/(1 + \tan x \tan y) = (\tan y)(\tan^2 x + 1)/(1 + \tan x \tan y)$
When $\tan y$ is positive, the former is greater by a factor of $\frac{1 + \tan x \tan y}{1 - \tan x \tan y}$
This means that it’s easier (you’re allowed a larger difference in the angle) to achieve a given closeness of the fraction $(ad-bc)/(ac+bd)$ to $\sqrt3$ by picking the angle to be greater than $π/3$ ($=60°$) than if the angle is less than $π/3$.
As denominators get larger, the primes get sparser, so this becomes closer to the usual problem: $\sqrt3$ becomes about as easy to approximate to a given closeness from below as from above (by fractions of the form $(ad-bc)/(ac+bd)$), but among roughly equally distant approximations, the ones from above are closer in angle than the ones from below.
Acknowledgements
This post by MJD and ensuing discussion on Hacker News.
TikZ notes
Why these
(My standard “howto” versus “from the bottom up” rant: magic incantations, you forget, you’re not aware of what’s possible and what’s not, you build up a mental model that does not match reality — conversely if you understand a few things, then you can ignore the rest as “shortcuts”. You may not do things in the most elegant way, but you’ll be able to get things done.)
Should I use TikZ?
Use it if you need to create a picture, and you prefer to specify it rather than draw it: with TikZ, instead of drawing a picture with a mouse or pencil, you write down specifications for what the picture should look like.[7]
Alternatives:
- use another “specification” method (like pstricks, xypic, metapost), or
- just draw the picture in some visual program (some, like xfig, can even convert to TikZ).
(Aside: Bundled with TikZ or PGF are also a bunch of general TeX macro libraries like pgfkeys or pgffor, and the interesting-looking pgfpages, but those are probably not why you’re reading this right now.)
[Technical] How is TikZ implemented?
TikZ is built on top of:
- a PGF “system” layer that is a compatibility layer between the different formats of DVIPS specials, DVIPDFMX specials, PDF instructions, etc., and
- a “basic” layer (library of commonly used functions, on top of the “system” layer) that is made up of the “core” and “modules”.
Its syntax is a mixture of METAFONT + PSTRICKS + other stuff.
[Concepts] What is a picture made of?
(Check this) In TikZ, a picture is made of paths. Paths can be drawn, filled, etc.
A path is a series of straight lines and curves that are connected […]. You start a path by specifying the coordinates of the start position as a point in round brackets, as in (0,0). This is followed by a series of “path extension operations.”
Can I see some examples?
\input tikz
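% the classic "house" figure (a square, a roof, and both diagonals), with every corner rounded by 8pt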
\tikz \draw[thick,rounded corners=8pt]
(0,0) -- (0,2) -- (1,3.25) -- (2,2) -- (2,0) -- (0,2) -- (2,2) -- (0,0) -- (2,0);
\bye
Syntax [Note to self: ideally only one of these will be shown to the reader…]:
Common form:
- plain TeX: \tikzpicture ... \endtikzpicture
- LaTeX: \begin{tikzpicture} ... \end{tikzpicture}
Shortcut for the above: \tikz {<path commands>}
How can I draw a line?
Use \draw and give the path, ending with a semicolon, e.g. \draw (-1.5,0) -- (1.5,0);
This is a shortcut for \path[draw] (-1.5,0) -- (1.5,0);.
Simple time logger
While reading How to Live on 24 Hours a Day, I thought I’d like to try logging my time. Tried Toggl for a couple of days (from reviews it seemed one of the better pieces of software) and gave up. Its default mode is that you start a timer when you start working on something, and stop it when done, which doesn’t work for me sadly — e.g. when I get distracted and do something else entirely. It’s also all too easy to just not log time (fail to start a timer) for hours on end. (Of course nothing can really solve that problem, but read on…)
I also remembered later that I had earlier tried gtimelog (and its Mac clone, Mactimelog), and had reasonable success for at least a little while. See also mention in this thread.
So, outlining some desirable features of a personal time tracker for distraction-prone me:
- Time tracking must be done after-the-fact, i.e. you should enter what you actually did in some time interval, not what you think you’re going to do. (This idea comes from gtimelog.)
- This also solves the problem of starting a timer for something and getting distracted or pulled into something else: the only other way to solve that would be to require confirming that you’re still doing the same thing every minute or 5 minutes or whatever.
- Should be easy to say that in an interval you did the same as in the previous one, thereby “extending” it.
- That way, if you worked on something for an hour, you could just tap the same button every few minutes.
- Every minute of every day should be covered. (Showing as “untracked time” is ok.)
- Focus on capturing stuff. I definitely don’t need things like projects or billing/invoicing or whatnot!
- Export to a simple text file. No fancy databases or proprietary backends or whatever.
- Although: some sort of reviewing later would be nice, but that can even be a separate application.
- No binary classification into “work” and “non-work”, or “good” and “bad”.
- This is probably what ultimately made me stop logging in gtimelog, and from emails I exchanged with the developer of gtimelog at one point, probably what made him give up too (at the time): depending on your relationship with yourself, it can be depressing to know how much time you’re procrastinating. But there’s no “good” and “bad”, it’s always some aspect of your self that is being gratified; so value it, and perhaps at most use the data to start an internal conversation about shifting the balance (if necessary).
- A timer for the current interval would be nice. (Idea from Toggl)
- Should work from a mobile device. A mobile-friendly web UI is probably enough.
With all that in mind, we can come up with the following design.
Design
Data model
Keep it as simple as possible: lines of text, of the format <timestamp> <message>. The lazy programmer in me is tempted to have “seconds since epoch in UTC”, but it’s probably more convenient to have an actual human-readable timestamp in the local timezone. Also, for convenience, we can probably also include the end time and elapsed duration, with the understanding that they are to be ignored when processing the file.
So lines like:
%Y-%m-%d %H:%M:%S %z (%d min from %H:%M) This is what I did
Note that this “more convenient” data model requires
Persistence
For now, nothing. Maybe you can “download” the timestamp file or see it in the browser console, or inject it via the console. Later, we could persist somewhere, maybe Firebase or whatever. Needs auth and all that.
UI
This is the main thing. At any given time, the UI shows:
(big) <timer, in minutes:seconds>
(small) What did you do?
(medium) <same as previous> <untracked time> <something else: type>
If you tap the “same as previous” button, it extends the previous one. (Overwrites the timestamp.)
Let’s go into more detail – for this UI to work, we need the following:
- The timer needs a start time, and access to the current time
- Need the data file (at least the last line) to write to, and read from
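As a sanity check that the data model and the “same as previous” button hang together, here’s a rough sketch in Python (class and file names are made up; this is not meant as the actual implementation):

from datetime import datetime, timezone

class TimeLog:
    """Each line: '<end timestamp> (<n> min from <start HH:MM>) <message>'."""

    def __init__(self, path="timelog.txt"):
        self.path = path
        self.start = datetime.now(timezone.utc).astimezone()  # start of the current interval
        self.last = None                                       # (start, message) of the last entry

    def _format(self, start, end, message):
        minutes = int((end - start).total_seconds() // 60)
        return (end.strftime("%Y-%m-%d %H:%M:%S %z")
                + f" ({minutes} min from {start.strftime('%H:%M')}) {message}")

    def log(self, message):
        """Record what was done since the previous entry, after the fact."""
        end = datetime.now(timezone.utc).astimezone()
        with open(self.path, "a") as f:
            f.write(self._format(self.start, end, message) + "\n")
        self.last, self.start = (self.start, message), end

    def same_as_previous(self):
        """Extend the previous entry: overwrite its end timestamp with 'now'."""
        if self.last is None:
            return self.log("(untracked time)")
        prev_start, message = self.last
        end = datetime.now(timezone.utc).astimezone()
        with open(self.path) as f:
            lines = f.read().splitlines()
        lines[-1] = self._format(prev_start, end, message)
        with open(self.path, "w") as f:
            f.write("\n".join(lines) + "\n")
        self.start = end

The timer shown in the UI is then just “now minus self.start”, and the data file stays a plain text file that can be exported or grepped.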
More
Hmm…
- If $n \equiv 0 \pmod 9$, then $S(n^k) \equiv n^k \equiv 0 \pmod 9$, so $9p$
- If $n \equiv 1 \pmod 9$, then $S(n^k) \equiv n^k \equiv 1^k \equiv 1 \pmod 9$, so $9p$
- If $n \equiv 2 \pmod 9$, then $S(n^k) \equiv 2^k \equiv 2, 4, 8, 7, 5, 1 \pmod 9$, so $\frac{9}{6} p = \frac32p$
- If $n \equiv 3 \pmod 9$, then $S(n^k) \equiv 3^k \equiv 3, 0 \pmod 9$, so $0$
- If $n \equiv 4 \pmod 9$, then $S(n^k) \equiv 4^k \equiv 4, 7, 1 \pmod 9$, so $\frac93 p = 3p$
- If $n \equiv 5 \pmod 9$, then $S(n^k) \equiv 5^k \equiv 5, 7, 8, 4, 2, 1 \pmod 9$, so $\frac96p = \frac32p$
- If $n \equiv 6 \pmod 9$, then $S(n^k) \equiv 6^k \equiv 6, 0 \pmod 9$, so $0$
- If $n \equiv 7 \pmod 9$, then $S(n^k) \equiv 7^k \equiv 7, 4, 1 \pmod 9$, so $\frac93p = 3p$
- If $n \equiv 8 \pmod 9$, then $S(n^k) \equiv 8^k \equiv 8, 1 \pmod 9$, so $\frac{9}{2}p$
Total: $9p + 9p + 3p + 3p + 3p + \frac92p = (27 + \frac92) p$.
Average: $(3 + \frac12)p$.
And $p$ itself is $\frac29 \frac{1}{\log N}$, so this total probability is $(3 + \frac12)(\frac29)\frac{1}{\log N}$ which is $\frac79\frac{1}{\log N}$. Still about a factor of $2$ less than the numbers we see…
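(The residue cycles used in the list above are easy to check, e.g. in Python:)

# powers of n mod 9, up to the first repeat -- matches the cycles listed above
for n in range(9):
    cycle, x = [], 1
    while True:
        x = (x * n) % 9
        if x in cycle:
            break
        cycle.append(x)
    print(n, cycle)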
CSS Notes, round 2
This time I’m going to try writing a “CSS layout for TeX users”, aka “CSS layout from the outside in” or “Layout with CSS: A procedural approach”.
The idea is to describe a minimal subset of CSS layout properties with which it’s possible to get stuff done, while staying compatible with the inclusion of these “restricted language” blocks of elements into others that may be using different CSS.
Note 1: This is intended for “casual” CSS users, i.e. those who’ll occasionally create their personal web page or a page for some particular project, and then are likely to forget everything they learned about CSS until they need it again many months later. In particular it does not address things like large teams of people working on a common style file, or a requirement to make something look exactly according to some spec (well of course we’d like to be able to achieve whatever we want, but if you’re the solo person on the project sometimes you can afford to pick a different design that you know how to implement), etc.
Note 2: You may want to just use some framework like “Bootstrap” or “Foundation”, instead of trying to code CSS yourself. But if for some reason that’s not your preference, then read on.
Before that, notes from reading Rachel Andrew’s The New CSS Layout
- “A formatting context is the environment into which a set of related boxes are laid out. Different formatting contexts lay out their boxes according to different rules. For example, a flex formatting context lays out boxes according to the flex layout rules [CSS3-FLEXBOX], whereas a block formatting context lays out boxes according to the block-and-inline layout rules [CSS2].”
Basically: the layout procedure/algorithm that is used.
There are many ways in which an element can create a new formatting context; the one (new addition) recommended explicitly for that purpose is “display: flow-root;” – which says that the inner display property is that of a new flow root.
In this default(?) formatting model:
- Block-level elements “use up” the full width of the container (of the formatting context) even if their specified width is smaller, i.e. each one appears on a new line. Like hboxes inside a vbox.
- Inline elements appear next to each other if there’s room for them, else on a new line, like words in a paragraph.
Aside 1: Floats. Elements that are floating don’t count in this. We can still use floats for “cutouts” / parshape, using “shape-outside”.
Aside 2: On position:
- position:static is the default.
- position:relative does nothing by default, but is useful to set on a container so that its children can be positioned relative to it instead of relative to some other ancestor. (So why not set it always?)
- position:absolute, position:fixed – relative to some positioned ancestor and relative to the viewport, respectively. Both break out of the flow, and we need a plan for how to prevent something from overlapping (going under them). If using fixed, then position:sticky is a slightly better/cooler alternative.
Aside 3: Multiple-column layout (like LaTeX’s multicol): on a container, set
- column-width to specify the ideal width (hsize) of each column,
- column-count to specify the ideal (or max, if column-width is specified) number of columns.
(Example usage: specify a sufficient column-width to contain the entries (e.g. a list of checkboxes or whatever), and then if there’s more width on the screen, we’ll automatically get more columns. One tragedy is that if we do have something wider than that width (e.g. a long paragraph!) then it will be restricted to the specified column-width, which may be say 51% of the available width. So while we’d ideally like “pick the number of columns based on the available width, then lay out such that those many columns fill the available width”, this doesn’t do the second part of that.)
Anyway, back to the formatting models/contexts.
The one of interest now is the flex formatting context. Set it with display: flex. It can either behave like a hbox (no wrap, elements shrink up to their common min-content-size, e.g. longest word in a paragraph), or (what I’m calling) a hlist (use flex-wrap: wrap).
Unfortunately, this also requires the children to do something (have a “flex” property, wtf sigh). But maybe that’s an acceptable price to pay, if it has no other undesirable side effects?
If so, it seems like the flex formatting context is a strict generalization of at least the behaviour of inline elements in the default context. (Can we also achieve the equivalent of block elements?)
Grid:
- gap between items with gap (or row-gap and column-gap)
- specify which rows/columns a particular item spans, with grid-row and grid-column: this can either be a single number like 3 (equivalent to 3 / 4) or two numbers like 3 / 4, meaning the area between 3.0 and 4.0 (starting at 1.0 (WTF again)).
- more visual / general: give names to the areas, like grid-template-areas: "a a b" ". d d" "c e e"; but each has to be rectangular :-)
See https://developer.mozilla.org/en-US/docs/Web/CSS/display
- obviously, for an element’s “outer display type”, we want “inline”, not “block”: imagine laying out the children of an element one by one, and suddenly one of them decides it wants to occupy a full row! A possible exception is things we’re intentionally inserting as hboxes inside a vlist.
- For an element’s “inner display type”, it seems that “flow-root” would make the most sense, but unfortunately it’s not well-supported yet (https://caniuse.com/#search=flow-root) so we’re left with “inline-flex” as probably the best option?
Ah: According to the compatibility tables at https://developer.mozilla.org/en-US/docs/Web/CSS/display-inside there is ZERO support for multiple keyword values… so, so much for that.
See https://developer.mozilla.org/en-US/docs/Web/CSS/Layout_mode for a list of layout modes.
Ultimately we’ll try to achieve three kinds of divs, and that will take us quite far:
- hbox – lay out the children strictly left to right
- vbox – lay out the children strictly top to bottom
- hlist – lay out the children left to right, wrap at width.
More general would be to allow stretchable glue, but let’s stick with this for now.
This worked, for an overflowing row:
.hbox {
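  /* keep the cells on one line (nowrap) inside a new formatting context (flow-root),
     and show a horizontal scrollbar instead of letting the row overflow */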
overflow-x: scroll;
white-space: nowrap;
display: flow-root;
}
.cell {
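  /* each cell is an inline-level box (so cells line up side by side),
     but whitespace inside a cell is preserved and may wrap */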
display: inline-block;
white-space: pre-wrap;
}
– things that didn’t work were display: inline-block and display: inline-flex (both just overflowed instead of scrolling).
(Edited later: Turns out it’s better to use flex, with display: inline-flex for everything. Then .hbox is simply the default, and .vbox is simply flex-direction: column. More later.)
Also see:
- https://css-tricks.com/snippets/css/a-guide-to-flexbox/
- https://css-tricks.com/fixing-tables-long-strings/
1. (Actually trying this on the file 2850-0.txt gives output like “3181 the 2205 1643 1583 of 1524 and 1383 to 1228 a 1203 i 1026 that 897 in 767 he 733 was 709 you 691 it 621 his 580 is 504 have 464 had 449 with 413 my 410 for 406 we 402 which 393 as 350 at”, so clearly some work is needed.)
2. This useful image of a “customer” and a “seller” comes from Pratt’s paper, mentioned later.
3. (In the terminology of computational complexity theory: This number $d$ serves as a short “certificate” of $N$’s compositeness. If $\mathrm{PRIMES}$ denotes the set of prime numbers and $\mathrm{COMPOSITES}$ denotes the set of composite numbers, then this proves that $\mathrm{COMPOSITES}$ is in the complexity class $\mathsf{NP}$. So by showing that the complement of $\mathrm{PRIMES}$ is in $\textsf{NP}$, it shows that $\mathrm{PRIMES}$ is in the complexity class $\textsf{co-NP}$.)
4. In the terminology of computational complexity theory: showing that $\mathrm{PRIMES}$ is in $\textsf{NP}$ is less trivial.
5. Three years after his PhD with Donald Knuth, and a year after the publication of the paper on the Knuth–Morris–Pratt algorithm.
6. According to Ulrich Stiehl (see later); also confirmed by the data below.
7. (This gives some idea of when not to use TikZ: sometimes it may be easier to just draw the picture directly, especially if you are a visual person who’s good at drawing. Conversely, TikZ does not completely absolve you of visual thinking; you still have to know (mostly) what picture you want, in order to specify it.)
(Thanks for reading! If you have any feedback or see anything to correct, contact me or edit this page on GitHub.)