(from the final 9 seconds of this video)
Kowloon City Cross Section, This Is Colossal scan, 4500×1636
Kowloon City Cross Section, Kowloon Large Illustrated (1997), Amazon book scan, 2560×956
Composite Kowloon City Cross Section, 3166×925
Kowloon City Cross Section, deconcrete.org scan, 4716×1754
I first saw an image of the cross section of Kowloon Walled City in 2014. It is a wonderful illustration of the infamously dense city within a city that once existed in Hong Kong. The attention to detail is extraordinary, with a dozen birds, hundreds of glowing orange human silhouettes, and the outlines of thousands of household objects filling up the canvas. Each room is unique, every shape is different from the others of its type. The messes of television antennae on the roofs initially appear to be trees, while the plants inside apartments are colored solid green, setting them apart from the other hollow, unfilled objects surrounding them.
The busy city shines in its full 4500×1636 resolution. Every silhouette tells a story. As waking arms stretch over a bottom bunk bed, someone else falls asleep outside on the roof. Behind a wall on which a man carrying a large bag leans, a pair of people sit facing each other across a folding table and a small child climbs up on a counter. It’s not clear whether the man with the bag is aware of the people on the other side or if they are strangers to him.
Something else about the image stands out: it was a sloppy scan, of shoddy and inconsistent quality. There are obvious vertical division lines showing where the separate scans overlapped (not unlike the idea of “seams” in some digital images, which I wrote a bit about earlier this week), and varying color profiles and levels. The bottoms of the page numbers and some text are cut off on the edges, and entire vertical slices of buildings gradually slide into the dividing gutter on the left side of the image.
I am pretty sure that the scan which experienced a mini-revival on the internet a few years ago was from a This Is Colossal article, published November 4, 2014, which featured the illustration and the book it originally appeared in. The book’s title is in Japanese, but is translated on Amazon as Kowloon Large Illustrated (1997).
By my count, this same 4500×1636 image, first appearing on This Is Colossal, appears 46 different times at full resolution on Google Image Search, along with many more smaller versions. That is a very large number of copies. I consider myself a moderately experienced Image Searcher, and I rarely see so many different domains hosting unique Google-accessible copies of the same high-resolution image. To me, this is a sign of an underground classic image.
A few months later, sometime in early 2015, I had access to a good printer, so I printed out the beautiful Kowloon cross section on some nice paper and hung it up on a wall above a new, uncomfortable, ugly chair that I never sat in. The following year I moved to a different city, and the glossily posterized Kowloon has been rolled up in a tube in my closet ever since.
A @smashedmcdouble tweet from the other day reminded me of the Kowloon cross section, and upon revisiting it I immediately noticed something new in the familiar image. Not only were certain page numbers and text cut off by the sloppy scan, but almost an entire page – “Page 12” – was missing as well. Knowing that I wouldn’t be able to adequately explain the issue in 280 characters (a hunch supported by this post), I made a reference image to highlight the missing section and posted it on Twitter.
Shortly thereafter, @justinetlai blew the lid off the entire mystery, finding a different scan on the Amazon page for Kowloon Large Illustrated, the book which contains the cross section. The listing for the over-sized, 14.3″ x 10.3″ book includes three preview photos: the front cover, back cover, and the Kowloon Walled City cross section with the elusive Page 12 that was missing on nearly all of the other versions of the image floating on the Internet.
It turns out that Page 12 is dominated by stairwells with vertical, solid pale green backdrops. The two main buildings of the page are offset from each other by half a floor (see the image above), which must have created a conundrum for whoever had to label the floors. If the floors were numbered normally (1, 2, 3…) in each building (see Fig. 1 below), a person walking up the stairs would see Floor 1 of the building on the left, go up half a flight of stairs to Floor 1 of the building on the right, up the stairs to Floor 2 on the left, Floor 2 on the right, and so on. The floor numbers could be reconfigured to make more sense to a traveler of the stairwell (Fig. 2), but then every other floor would be skipped in the numbering of each building. A solution that might be acceptable to both the building residents and stairwell travelers (Fig. 3) would assign subfloors (1a, 1b, 2a, 2b, or 2blue, 2red, etc.) so that each stairwell landing has a unique identity without skipping any numbers. Or maaaybe they came up with an even better system.
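As a toy sketch of that Fig. 3 scheme (the function and its exact 'a'/'b' suffix convention are my own invention, not anything from the book):

```python
def subfloor_labels(num_landings, start=1):
    """Label stairwell landings for two buildings offset by half a floor.

    Landings alternate between the left and right building; each pair
    shares a floor number with an 'a'/'b' suffix, so every landing gets
    a unique identity and no number is skipped.
    """
    labels = []
    for i in range(num_landings):
        floor = start + i // 2  # both landings of a pair share a number
        side, suffix = ("left", "a") if i % 2 == 0 else ("right", "b")
        labels.append((side, f"{floor}{suffix}"))
    return labels
```

Six landings up from the ground come out as 1a, 1b, 2a, 2b, 3a, 3b, alternating between the two buildings.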
But soon after celebrating the recovery of the missing Page 12, I noticed that the Amazon scan was missing the left side of the image that appears in the widely shared This Is Colossal image. In other words, there is no version of the cross section that is complete – they’re both missing a side. UNTIL NOW.
It certainly isn’t perfect, but I made a composite image combining the This Is Colossal and Amazon scans (below). The Amazon scan is lower resolution than the Colossal scan, so I had to shrink the Colossal image (losing some detail in the process) to get things to line up properly. In general, the Amazon scan is much cleaner, despite its lower resolution, so I used it as the bulk of the composite. I only used the Colossal scan to fill in the missing left side of the Amazon scan. The two scans also have very different color profiles and image qualities, so I had to do some color matching and level balancing in Photoshop, and even then it’s a little weird. But the weirdness of it is in keeping with the history of this image, and to my knowledge it is now the only version of the Kowloon Walled City Cross Section on the internet that contains the complete scan.
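For anyone curious about the mechanics, the compositing boils down to something like the Python/Pillow sketch below. The function name and the example strip width are my stand-ins, and the real alignment and color matching were done by eye in Photoshop, not in code.

```python
from PIL import Image

def composite_scans(colossal, amazon, left_strip_width):
    """Combine two scans of the same spread into one image.

    The higher-resolution `colossal` scan is shrunk to the height of
    the cleaner `amazon` scan (losing some detail), then only its
    leftmost `left_strip_width` pixels -- the region missing from the
    Amazon scan -- are pasted in front of the Amazon scan.
    """
    scale = amazon.height / colossal.height
    shrunk = colossal.resize(
        (round(colossal.width * scale), amazon.height), Image.LANCZOS
    )
    out = Image.new("RGB", (left_strip_width + amazon.width, amazon.height))
    out.paste(shrunk.crop((0, 0, left_strip_width, amazon.height)), (0, 0))
    out.paste(amazon, (left_strip_width, 0))
    return out
```

In any real run, the strip width would have to be measured from the scans themselves rather than guessed.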
I suppose that someone could just buy the real book it appears in and do a decent job of scanning it, but I no longer believe in miracles.
In the This Is Colossal article, another earlier blog post is credited as the source: a March 30, 2010 post published on deconcrete.org. The deconcrete.org post features a different scan which is very similar to the Colossal image but slightly larger, revealing even more sloppiness. The scans are tilted at an angle, which makes white space appear at the borders. The Colossal image crops these white spaces out, which is also why it ended up cutting off some of the page numbers. My goal is to find out who did the original sloppy scan of the book which is the true origin of so many of the internet’s images of the Kowloon cross section. The deconcrete.org post references another earlier blog post (at zoohaus.net) as the source, but the website no longer exists.
Twice this week I found myself carefully crawling through a detailed digital landscape, searching for anomalies distributed throughout an enormous visual model of a real world location.
I will immediately click pretty much anything that has the word ‘gigapixel’ in it. As soon as I saw this tweet from Kyle McDonald late last night I knew what to do.
— Kyle McDonald (@kcimc) December 19, 2018
As defined by Noah Webster back in the early 1800s, “a gigapixel image is a digital image bitmap composed of one billion pixels, 1000 times the information captured by a 1 megapixel digital camera. A square image of 32,768 pixels in width and height is one gigapixel.” This Shanghai panorama is made up of 195 of those.
In other words, you can zoom in on distant buildings and people and still maintain a reasonably high level of crisp resolution. Many of us have experienced the thrill of doing this at some point, but what interested me about McDonald’s tweet was the focus on the anomalies at the seams.
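As a quick sanity check on the arithmetic in that definition:

```python
# A gigapixel is one billion (10**9) pixels.
side = 32_768
pixels = side * side  # pixels in a 32,768 x 32,768 square image
# 1,073,741,824 pixels: just over one billion, the "binary" flavor of giga

# 195 gigapixels carries the information of roughly 195,000
# one-megapixel camera frames
equivalent_frames = (195 * 10**9) // 10**6
```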
The 195 gigapixel image is made up of numerous sub-images, stitched together. In the Shanghai Panorama, the various sub-images were captured at nearly the same time, but it wasn’t exact. On the “seams” of the images, where the edges meet, it’s possible to spot people that were photographed at two separate times, a few feet apart. This causes the same person to occupy two separate places in the full, aggregated panorama, with one standing on an edge adjacent to the other. Meanwhile, the “edges” of the sub-images are effectively invisible, as they can only be inferred from identifying anomalies in the image.
The experience of searching a giant image for these human doubles is very strange. I found it at once soothing and somewhat disturbing. Soothing to slowly float in and out of the daily lives of so many people unaware that this particular, inconsequential moment would be captured and held onto for so much longer, and soothing to pan the camera’s view along the streets and come across recognizable city inhabitants like street food cooks, sanitary workers, and businessmen sitting on stairs.
Somewhat disturbing to feel like a surveillance officer, although I assume that anyone setting foot outside in Shanghai realizes that they are probably on camera somewhere. But still, the incredible detail on offer here, especially when looking almost straight down at the plazas below where the resolution is sharpest, naturally draws the mind to trivial investigative matters. Who was a tourist and who was walking to work, and how long had the group of five people been searching the ground for some lost item underneath or around a wheelchair?
Searching for doppelgängers at the seams brought me back to reality, somewhat ironically. McDonald took a few screenshots of the ones that he found, and I came across a few myself. The process of identifying them is not unlike Where’s Waldo. There are many false positives, like pairs of people wearing nearly the same thing or striding in unison, side by side. But at some point, a key piece of evidence emerges.
In the image below, it seems possible that two different people could be carrying the same flag, with similar plain black shoes and ponytailed hair. But the same blue streak appears on both of their backpacks, and that’s enough for me. The large shoulder bag is easily noticed in the image at the top of this post, while the wristwatches and umbrellas rotated at exactly the same angle stand out in the image above.
At the other end of the spectrum of image quality and representation, I was introduced to the idea of Cubist Google Earth via this Are.na channel. If you don’t know what Are.na is but you’re interested in the idea of a visually-oriented organizational, open-ended inspirational/creative tool, you should check it out. Far from the hyper-resolution gigapixel image of Shanghai, the 3D layers in Google Earth are typically distorted, glitched representations of the structures they are meant to represent. A glitch on a highway comes to resemble a civil engineering mishap in SimCity (or the Cities: Skylines simulation game, which is much better than SimCity but doesn’t have the Kleenex/Band-Aid style of brand recognition, perhaps because it has a colon in the middle of its title).
As the name indicates, many of the images found in Cubist Google Earth are made up of buildings that seem to be shown from several different angles at once.
Since the Cubist Google Earth channel is “open”, which means anyone can add links/images to it, I decided to go on a hunt of my own. I quickly headed to LaGuardia airport in Queens and found a mutated set of planes that reminded me of the Shanghai Panorama “seams” experience.
I have some more material related to this (a delightful evening spent looking for broken telephone wires in Google Map Streetview), but I’ll put a lid on it for now. I don’t have comments turned on for this website because it’s embarrassing when the majority of a website’s comments are spambots, but if you have your own examples of “seams” in panoramic pictures, Google Earth images, or elsewhere, I’d love to see them.
Base is a two-minute loop made up of several smaller loops. Each of the panels is a 30-second loop. The left panel is mostly Reagan, the right panel is mostly Trump. The subjects of the left and right panels occasionally “reach” toward each other by incorporating some of the facial characteristics of the subject on the opposite side. The center panel morphs between 75% Reagan / 25% Trump and 75% Trump / 25% Reagan, and is timed so that it “passes” the facial expression back and forth between the left and right panels. This makes it possible to seamlessly follow a morphing facial expression across the canvas without interruption.
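As a sketch of that center-panel timing (the cosine is my own stand-in for whatever easing the actual piece uses), the blend weight might look like:

```python
import math

def center_blend(t, period=120.0):
    """Fraction of Reagan in the center panel at time t (seconds).

    Swings between 0.75 (75% Reagan / 25% Trump) and 0.25
    (75% Trump / 25% Reagan) once over the two-minute loop.
    """
    return 0.5 + 0.25 * math.cos(2 * math.pi * t / period)
```

At t = 0 the center leans Reagan, at the one-minute mark it leans Trump, and it crosses the even 50/50 point in between, which is where the expression gets “passed” between the outer panels.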
I spent all weekend in this strange world, so I can no longer tell if any of the above makes sense to anyone else. The sound accompaniment reinforces the visual flow – the “collisions” of the facial expressions are matched by the movements of the drones. The collisions are also visually marked by the small dots and the tilting of the panels.
For those who have only one second to spare instead of two minutes, here’s a shorter loop derived from the video:
C-Span is great when the camera stays live during lulls in the political action, when events are about to begin, or after they’ve ended. The underlying footage here is from one of those times. Lately I’ve been getting into the idea of dramatizing the mundane. I tried to edit the footage using loops to create tension and highlight certain recurring features, like reflections of light or passing vehicles.
This video is meant to play on a loop.
I went through an interesting time earlier this Summer and stopped working on things for an extended period. It amounted to a pause of about two months, which doesn’t seem so long now, but at the time felt like the end of the world. But I am feeling much better now, and creatively refreshed as well.
Lately I’ve been into the idea of documenting mundane events and then creating many different “versions” of them. Written statement of the event, storytelling version, audio recording, video, embellished, the same event from another perspective, recreated physically again as a simulation for someone else to experience. I’ve also been thinking about how to blend different techniques I’ve developed in new ways.
This video (excerpt) is part of that larger project. It’s a 54 second loop. I think of loops as sharing certain qualities with memories. Imagine watching a 3 minute sequence a single time, followed by a 20 second excerpt from it on an endless loop. Over time, the memories of the full sequence would become weaker. The shorter looping excerpt, besides being the dominant object of immediate perception, would take on new meanings through its repetition. Any sequence of data that is looped becomes part of a larger pattern. This is true no matter how random or nonsensical the original sequence. Loops can be used to create a second-order perspective on time and data.
I cut down a 30 fps video to a series of key frames, then did some manual morph editing to create slithery connections between them all. I think of this as a combination of several different “versions” of the mundane event. I tried to approach the audio from a foley sound artist’s perspective. The dimensions (1080×1920, portrait orientation) are unusual for viewing on a computer screen, but I think of this clip as a smaller part of a larger physical project, where it will (hopefully) work better in context.
Conferencecall.biz has experienced a bit of a revival over the last few hours. Someone posted about it on Hacker News and it made the front page. These hackers are so nice!
There are over 100 people visiting the site right now, the most since the halcyon days of Slate and Marketplace radio interviews back in early 2014. So if you’re finding this site through Hacker News, welcome. If this were 1999, I’d put one of those Under Construction GIFs up on a black background. But then I wouldn’t have made conferencecall.biz yet. Hmm. Makes sense.
For those in the Chicago area, please drop by the Rendr exhibition on May 2nd, from 5:30 – 8:00 pm. I’ll be showing an installation I’ve been working on for the last few months which uses a set of five Raspberry Pi units (inexpensive single-board computers), speakers, microphones, and displays. It also involves speech recognition, text-to-speech, custom dictionaries and language models, and Python. I enjoyed teaching myself all sorts of new things (for example, Linux and Python) in order to put it together. I’ll post some documentation (both video of the installation and code) afterward for those who can’t see it in person.
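For a rough idea of the shape of the thing (none of this is the actual installation code; the backends are stubbed out as plain callables), each Pi node runs a loop along these lines:

```python
import random

def conversation_loop(recognize, speak, responses, turns=3):
    """One node of the installation: hear a phrase, answer aloud.

    `recognize` and `speak` stand in for the real speech-recognition
    and text-to-speech backends on the Raspberry Pi; injecting them
    keeps the loop itself testable on any machine.
    """
    heard = []
    for _ in range(turns):
        phrase = recognize()  # blocks until a phrase is recognized
        heard.append(phrase)
        # Answer with a canned reply, or pick one at random if the
        # phrase isn't in the dictionary of known responses.
        reply = responses.get(phrase) or random.choice(list(responses.values()))
        speak(reply)
    return heard
```

On the real hardware, `recognize` and `speak` would wrap the speech-recognition engine and text-to-speech voice, with the custom dictionaries and language models doing the heavy lifting inside `recognize`.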