Java Software Technology

Java 10 HotSpot Disassembly on macOS High Sierra

Printing Generated Assembly Code From The HotSpot JIT Compiler documented back in 2013 how to view HotSpot-generated assembly code.

While still useful, the disassembler plugin referenced in the post is no longer available in binary form as the Kenai project has been decommissioned.

A number of references are available on how to build the plugin; however, information on how to build it on current macOS systems is hard to come by. Here is how to build the disassembler plugin for Java 10.

  • macOS High Sierra 10.13
  • Xcode 9.3 (including Command-line Tools)

  • pointed out the requirement for binutils 2.26
  • was a good starting point
  • OpenJDK Supported platforms:
  • OpenJDK Sources:
  • java command line arguments:


Java Software

The Cost of Contention

Martin Thompson first reported on the cost of contention using a simple benchmark that measures the time to increment a 64-bit counter 500 million times using various strategies. Results were reported here (section 3.1) and here (Managing Contention vs. Doing Real Work).

I re-implemented this benchmark here.

The results I observed (running on Java 9 with a 2017 MacBook Pro with a 2.9 GHz 7th Generation Kaby Lake Intel Core i7 processor) are comparable to those reported by Martin 7 years ago.

Method                            Kaby Lake, Java 10 (ms)    Original (ms)
Single thread                     70                         300
Single thread with volatile       2,700                      4,700
Single thread with CAS            3,500                      5,700
Single thread with synchronized   2,000                      n/a
Single thread with lock           9,300                      10,000
Two threads with CAS              10,800                     18,000
Two threads with synchronized     22,400                     n/a
Two threads with lock             52,500                     118,000

While this micro-benchmark is not representative of real-world workloads (as explained here), its simplicity tempts me to use it as the first benchmark to track optimizations to the air-java concurrency library. This would be followed up by a more comprehensive benchmark like this one, which measures both latency and throughput under various configurations, and finally by a real-world application.
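For reference, the single-threaded variants of the benchmark can be sketched as below. This is a minimal, unhardened harness of my own (not Martin's code); a serious run would also guard against dead-code elimination and use longer warm-up.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.LongConsumer;

/** Minimal sketch of the single-threaded counter-increment strategies. */
public class CounterBench {
    long plain;                          // baseline: unsynchronized increment
    volatile long vol;                   // volatile write on every increment
    final AtomicLong cas = new AtomicLong();
    long guarded;                        // incremented under synchronized or a lock
    final ReentrantLock lock = new ReentrantLock();

    synchronized void incSynchronized() { guarded++; }

    long timeMs(String name, long n, LongConsumer body) {
        long start = System.nanoTime();
        body.accept(n);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(name + ": " + ms + " ms");
        return ms;
    }

    void run(long n) {
        timeMs("plain", n, c -> { for (long i = 0; i < c; i++) plain++; });
        timeMs("volatile", n, c -> { for (long i = 0; i < c; i++) vol++; });
        timeMs("CAS", n, c -> { for (long i = 0; i < c; i++) cas.incrementAndGet(); });
        timeMs("synchronized", n, c -> { for (long i = 0; i < c; i++) incSynchronized(); });
        guarded = 0;
        timeMs("lock", n, c -> {
            for (long i = 0; i < c; i++) {
                lock.lock();
                try { guarded++; } finally { lock.unlock(); }
            }
        });
    }

    public static void main(String[] args) {
        new CounterBench().run(500_000_000L);  // the original benchmark's iteration count
    }
}
```

The two-thread variants split the 500 million increments across two threads contending on the same counter, which is where the costs in the table explode.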

Gradle Java Kotlin Software

Gradle Build with Java 9 Modules and Kotlin

When starting a new Java project recently, I found it surprisingly difficult to set up the Gradle build with support for Java 9 modules and the Kotlin language.

For others who might find themselves in the same bind, here is a gist with the minimal Gradle setup I came up with, which includes:

  • A multi-project gradle build,
  • Java 9 modules support,
  • IntelliJ IDEA integration,
  • Kotlin language modules with support for cross-references between Java and Kotlin code in the same module.

Here is a proof-of-concept example of the above build scripts in action:

Software Technology

Cloud Storage Costs


Recently I did a survey of cloud storage options and their costs. My focus was finding the cheapest scalable storage solution, one I could start using with minimal upfront cost.

If you are starting a new mobile app project, without any seed funding, the best choices are still Google Cloud Datastore and Amazon DynamoDB. Both offer low per-operation and per-data costs and data replication without any fixed monthly costs.

A Note on Dynamo DB vs Cloud Datastore

If your application performs a lot of operations (reads/writes) over a relatively fixed-size dataset, DynamoDB (with higher per-GB-per-month costs but significantly lower per-read/write costs) could be significantly cheaper. A company I worked at leveraged this difference to realize significant cloud storage cost savings by migrating from Datastore to DynamoDB.
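To make the tradeoff concrete, here is a rough cost model using the per-operation and per-GB prices from the survey below (the workload numbers are made up for illustration, and prices are as of the original survey):

```java
/** Rough monthly cost model: per-operation plus per-GB-month charges. */
public class StorageCost {
    // reads/writes per month, storage in GB; prices in USD per 100k ops and per GB-month
    static double cost(double reads, double writes, double gb,
                       double per100kReads, double per100kWrites, double perGbMonth) {
        return reads / 100_000 * per100kReads
             + writes / 100_000 * per100kWrites
             + gb * perGbMonth;
    }

    public static void main(String[] args) {
        double reads = 100_000_000, writes = 20_000_000, gb = 50;  // an op-heavy workload
        double datastore = cost(reads, writes, gb, 0.06, 0.18, 0.18);   // survey prices below
        double dynamo    = cost(reads, writes, gb, 0.004, 0.02, 0.25);
        System.out.printf("Datastore: $%.2f  DynamoDB: $%.2f%n", datastore, dynamo);
        // For this op-heavy workload DynamoDB comes out far cheaper despite the higher per-GB price.
    }
}
```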

Note: the following page is an excellent resource for those familiar with either Google Cloud services or AWS services to find out the corresponding service offerings of the other provider:

Cloud Storage Costs

Google Cloud Datastore

No per-node cost (bills per 100K reads/writes)

  • 6c per 100k reads
  • 18c per 100k writes

18c per GB per month

AWS DynamoDB

0.4c per hour minimum (for 5 wps and 10 rps)

  • 0.4c per 100k reads (prorated RCUs)
  • 2c per 100k writes (prorated WCUs)

25c per GB per month

Bigtable

65c per hour per node (195c per hour for the 3-node minimum)

17c per GB per month (SSD)

Spanner

90c per hour per node

  • 10K read qps per node

30c per GB per month

Cloud SQL for MySQL

19.3c per hour (13.51c per hour at the sustained-use price; 38.6c per hour with failover replication) [2 CPUs, 7.5 GB RAM]

17c per GB per month (SSD)



17.5c per hour (12c per hour with a 1-year term; 35c per hour with failover replication) [2 CPUs, 8 GB RAM]

11.5c per GB per month

Heroku Postgres

27c per hour (pro-rated) [400 connections, 8 GB RAM, 256 GB storage]




A couple of days ago, while planning a vacation trip with my girlfriend and contemplating spending some time on Christmas Island en route back from New Zealand, I stumbled upon these photos of the hydrogen bomb explosions, which were conducted off of the island in the late 50s.

Seeing these photos stirred something very deep in me.

I thought, all my life I’ve done my best to help people, to make the world a better place, and here they were. People building bombs and blowing them up, spreading toxic, destructive, radioactive waste across the entire planet.

I felt distraught. Watching the apparently calm, relaxed faces of these men filled me with disgust and disappointment.

A bunch of white men wearing white lab coats and white goggles on a white boat, looking out onto radioactive mushroom clouds like it’s some sort of spectacle. What were they thinking? Was this some kind of a game for them? Did they feel accomplished and good about their success? Did they feel like winners while watching their successful experiments bring ruin to our planet? Did it make them feel safer? Stronger? But most important of all… did they actually take a moment to meditate, to think about the legacy their work is leaving behind? Did they have a conscience? Did they have  a trace of humanity – a trace of color, left in their souls, wrapped up in lifeless white coats, on lifeless white military boats while watching life-obliterating white mushroom clouds of radioactive dust?

And then I remembered…

My career as a Software Engineer began at age 13 when I saw the first computer land on the desk at home. As soon as I saw it I knew – I had to learn how to program this. And also, I was filled with exhilarating joy at the next thought: this can help people! And so my journey began.

My first project was an Inventory Management system to help manage the inventory and accounting of a computer store, which ran maintenance-free for two years after I left for college, helping bring computers to people’s homes where they can learn and have fun in ways never before possible or imagined.

The next project, while still in high school, was a Service Management system to track service repairs of cash registers, helping run the grocery stores that bring food to people’s homes, and again running for years after I left for college.

Once there, I built the web sites of the student radio station, the student magazine, and numerous other student organizations – helping bring the student community together and improve student life. That period culminated with a new web site of the university itself. This ran (again, maintenance-free, backed by an easy-to-use content management system) for years after I left college, with the “New Students” link front and center on the home page where I put it (and where the Dean of the University asked the project manager to put it back after she presented a modified version  with the “New Students” link removed). Again, helping bring the experience of a university education to new students for years to come.

My first project for an American company was to automate the loading of thousands of antique book records from loosely structured documents. This helped make these books available for sale at an antique book auction website – where they could be discovered by people who need them.

Then I went on to build a live-auction bidding app that allowed people to participate in live auctions over the Internet years before a big online auction site shipped the same functionality. I built the website of the City of Vacaville, including a public transit trip planner for the city, years before the arrival of city transit navigation in the behemoth maps apps of today. I built numerous other web sites and apps, including the web site of a small esoteric San Francisco chocolate shop.

I even brought the live internet auction app to the bull pen for pure-bred livestock auctions during my time at Path-Wise Corporation, a startup founded and run for a few years by a third-generation rancher, turned lawyer, turned entrepreneur.

At Fujitsu Network Communications, I made countless contributions to software that manages fibre optic connections of big Telecommunications companies, helping people connect and share with loved ones via phones and the Internet no matter the distance.

During my short stint as a contractor at Apple, I optimized the product pictures on the online Apple store, helping people buy iPhones and iPads so they can learn and connect in new ways, and do so with less impact on the environment and the company bottom line.

At Webalo, I helped the company run its app faster and more efficiently on iPhones, aimed at helping people run their business more efficiently.

At Google, I contributed to Google for Work, helping people quickly set up e-mail and office productivity apps for their small business and enjoy new standards of security and collaboration.

At an online retail company, I helped the team build a more efficient software production pipeline and heard the heartfelt gratitude of the engineers, knowing I had helped their lives a tiny bit during my short stay.

At Snapchat, I did my tiny part in helping millions of people around the world experience the joy of expressing themselves freely in the moment together with friends… even on New Year’s Eve when we all pick up our phones at once.

And now I am about to join a company that works tirelessly to make cities more enjoyable to live in, while easing transportation (in a fun way) and reducing our impact on the environment. All at once, and executed in a mindful, responsible manner.

What is your legacy?


Software Technology

Product Management

As a Staff-level Software Engineer, I find that this post by Joel Spolsky best describes my standard of excellence for Product Managers – mostly in terms of the degree of attention to detail and technical aptitude that I would expect from a self-respecting, ambitious Product Manager.

Even though Joel is talking about his experience as a Program Manager at Microsoft, most product managers I have worked with at Google and elsewhere function at least partly in the space of a Microsoft Program Manager as described here.

Software Technology

The Software Business

I was reminded today of a quote by Bill Gates that I had read six years ago in a post by Jonathan Schwartz, Sun Microsystems’ then-just-departed CEO. Here it is:

The software business [is] all about building variable revenue streams from a fixed engineering cost base

This is from Schwartz’s Good Artists Copy, Great Artists Steal post, which is also very informative about how Software Patents are used in practice.

The above is an important definition for everyone involved in building software to keep in mind and never lose sight of.


Acting Classes in Los Angeles

I finally went over my notes today and pulled out all the Los Angeles Acting class recommendations we got from Deb Fink at her Audition Workshop in the SF Acting Academy last year. Here is what I uncovered.

Deb Fink had also recommended we watch the 2015 TV show Catastrophe. I am not sure how this is related, but it came up in my notes. I have been seeing ads for this show everywhere since I moved to LA, and the show is NOT on Netflix or iTunes.

Finally, there is the Baron Brown Studio for Meisner technique in Santa Monica, which a Google search uncovered before I had a chance to look over my notes.


WordPress on AWS


I had been running my WordPress blog on a shared hosting account for the last eight years, when about a month ago I decided to consider alternative solutions. In particular, I wanted to see if I could lower the cost of running my site. Here was my starting point:

Web Hosting from $71.40 / year (paid in 2-year increments)
Domain Registration from $10.87 / year
SSL Certificate from (promotion) $1.99 / year

Given that my blog gets very low traffic, I decided to give AWS a shot. With microbilling, I thought I should be able to drive the cost of my hosting even further down from the already low $6.86/month at hostforweb (for comparison, a blog on a managed hosting platform costs $8.25/month, billed yearly, and that does not even include Google Analytics integration, which I can easily add for free to my self-hosted WordPress site).

TL;DR: I was wrong. Hosting on AWS is more expensive, with lower reliability and scalability, and higher management overhead.

Just the cost of running the t2.micro VM instance is $9.36/month, and that does not include the cost of the RDS instance and the ELB.

I will share the details of my experience in a later post.

Actors Personal Presence

On Presence

Every time we hit a wall in training it is because we stopped being present. One can be performing an activity better than 99.9% of the world population and still be completely asleep while doing so. As long as presence is maintained there is no limit to how far we can extend ourselves in any endeavor.

It is also perfectly possible for someone to be fully present in one activity, say on the ski slope, and at the same time be completely lost in another, say a business meeting. And vice versa. So it becomes tempting to pursue presence by constantly engaging in new thrills, new activities. While helpful, this is not necessary, and it holds the same danger as any singular pursuit. Sooner rather than later you start picking up new activities in your sleep and once again have lost your presence. The path to presence is constantly looking inward while doing whatever it is you are doing in every moment of every second of every day. And that is the hardest thing of all and it pays absolutely nothing and leads absolutely nowhere.

How to look inwards? The standard recommended actions are the silent repetition of a prayer or mantra if you have one, or putting your attention on your breath if you don’t. Put your attention on your breath as a passive observer. Do not try to change your breath. And if you do, do not try to stop yourself. Simply observe yourself changing your breath. Continuously disengage from whatever action you find yourself drawn into and observe. Like a loving mother who watches over her baby crying in her lap without engagement or affect.



In The Plunge and Surface, Ali Saif describes very well a state of mind that has been predominant in my life:

“The sharp awareness of the present-moment and spontaneity of emotional response is lost, made sluggish rather. I often find I smile at something a microsecond too late and then remain smiling while others have moved on.”

Except for me the “sharp awareness” was never there to be lost in the first place. It was there to discover in transient moments imbued with a sense of enlightenment.

And then there is Ali’s conclusion, which came to me like thunder from a clear sky:

“Quite understandably it leads to negative-spiral thought process and frustration.”

This has never been “quite understandable” to me. Maybe there really is a connection and I never saw it. Negativity and frustration are certainly far from foreign to me.

“I’ve noticed that the faculty [of] deep thought, if you will, has remained intact through all of this.”

This, on the other hand, has always been clear as blue skies to me. Deep thought does not just remain; it is the only thing left in the house and gets free rein. This kind of state of mind is the great enabler of deep thought, and of the kind of “flow” that has always been the conductor of my most productive computer programming work.

I read an article on mindfulness once that associated mindfulness practices like meditation, which enhance awareness of one’s surroundings and ability to stay in the moment, with the infamous “flow” credited for hyper productive engineering work and such.

Based on my experience the article got things completely backwards. Computer work “flow” is exactly the opposite of mindfulness, and very much like the plunge that Ali describes.

Do not get me wrong. Mindfulness can result in a hyper increase in concentration and focus of attention, and I have experienced this. Only not in a way that is conducive to computer work in flow.

I look forward to the day I get to experience computer work in a hyper productive mindful flow. For now all I can say is that computer work in flow quickly induces a “plunge” which can easily become a steady state of being. I am in it and I am surrounded by it in Silicon Valley.


Random Thought

Here are some tips for success.

  • Keep code reviews short;
  • Follow style guides to the letter.

Dart vs Java (cont'd) — Richards and Tracer

This week I managed to port the rest of Dart’s benchmark_harness examples to Java.

The experience of porting Richards and Tracer was as smooth as that of porting the DeltaBlue benchmark. The only unfamiliar (and interesting) Dart feature I encountered that is worth noting was the ability to declare and pass method parameters by name.

Here are the numbers:

Richards Benchmark

Tracer benchmark

The results this time are limited to two recent nightly releases of the Dart SDK (22577 and 22720), and 32-bit and 64-bit Java. I ran each benchmark 3 times to warm up, and then 5 more times and took the best time of the 5 runs as the final number you see on the charts.
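The timing protocol just described (a few warm-up runs, then the best of five timed runs) can be sketched as follows; this is my own minimal harness for illustration, not Dart's benchmark_harness:

```java
/** Minimal harness: a few warm-up runs, then report the best of N timed runs. */
public class BestOfHarness {
    public static long bestOfNanos(Runnable benchmark, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) benchmark.run();  // warm-up, untimed
        long best = Long.MAX_VALUE;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            benchmark.run();
            best = Math.min(best, System.nanoTime() - start);
        }
        return best;  // nanoseconds of the fastest timed run
    }

    public static void main(String[] args) {
        long[] sink = new long[1];  // side effect to keep the loop from being optimized away
        long ns = bestOfNanos(() -> { for (int i = 0; i < 1_000_000; i++) sink[0] += i; }, 3, 5);
        System.out.println("best of 5: " + ns + " ns");
    }
}
```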

I did a lot of experimenting based on feedback I got on the Dart mailing list. I am well aware that Java needs to run a method some number of times before the JIT kicks in, and of the caveats of OSR. However, in my tests I found no substantial differences when running the benchmarks with a longer warm-up time.

Java 8 results were not different enough to warrant inclusion in my tests. I test both Java and Dart VMs with default settings, and do not intend to tweak custom VM flags in order to optimize each VM. It is obvious that a lot can be done by tweaking various VM parameters. My goal here is to get a gut feel for the relative out-of-the-box performance.

I was told that for a fair comparison Dart should be evaluated against the 32-bit client JVM, as the Dart VM is also optimized for use on client devices (with more focus on things such as faster startup vs long-term throughput). Hence, I include the 32-bit client JVM in my tests. However, for all practical purposes, 64-bit JVMs are more relevant and more widely used nowadays, so I feel obliged to also include the 64-bit server JVM. There is no client version of the 64-bit JVM, by the way. To be fair, 64-bit compilation does have the advantage of access to a much larger set of registers, which can be used to gain performance.


Dart vs Java — the DeltaBlue Benchmark

As of this writing, the Dart performance page tracks Dart VM performance as measured by the DeltaBlue benchmark.

I ported the benchmark_harness Dart package (including the DeltaBlue benchmark) into Java and ran against the latest Java 7 and 8 JDKs.

The experience of translating Dart to Java was surprisingly smooth. Some of the most common small porting tasks included:

  • Dart bool to Java boolean;
  • Dart C++-like super call syntax;
  • Dart constructor syntactic sugar;
  • Dart shorthand (=>) functions to Java full format;
  • Wrapping Dart top-level functions and variables inside a Java top-level class;
  • Changing the use of the Dart Function type to a Java Runnable;
  • The Dart truncating division operator ~/, which apparently is equivalent to plain division (/) when applied to integers;
  • Dart list access [] operator to Java List.get()

The trickiest part of the translation was the following piece of code that appeared absolutely befuddling at first sight:


As it turns out, this is simply an array literal


prefixed by a generic type parameter specifying the type of the elements in the list


and followed by the list access ([]) operator, getting the element of the list at index value:


After working my way through this, the translation went smoothly, until I ran the benchmark and hit a NullPointerException. In DeltaBlue, the BinaryConstraint constructor calls addConstraint(), which is overridden in its subclasses. The ScaleConstraint subclass implementation of addConstraint(), in particular, accesses ScaleConstraint fields that are initialized in its constructor. This pattern works in Dart, where apparently “this” constructor arguments are stored in their corresponding instance fields before the super constructor is invoked. Since this is not possible in Java (the super call must be the first statement in a constructor), I moved the addConstraint() call from BinaryConstraint to each of the subclass constructors. With that fix the port was complete, and I was able to run the Java version of the benchmark.
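The failure mode can be reproduced in isolation. This is a simplified sketch (the field and class shapes are mine, loosely mirroring the benchmark's names) showing both the virtual-call-from-constructor NPE and the fix of moving the call into the subclass constructors:

```java
import java.util.ArrayList;
import java.util.List;

/** Reproduces the super-constructor virtual-call NPE hit while porting DeltaBlue. */
public class ConstructorOrder {
    static abstract class BinaryConstraint {
        BinaryConstraint() {
            addConstraint();  // virtual call: runs before any subclass fields are initialized
        }
        abstract void addConstraint();
    }

    static class ScaleConstraint extends BinaryConstraint {
        List<String> strengths;
        ScaleConstraint() {
            super();                        // addConstraint() fires here...
            strengths = new ArrayList<>();  // ...but this assignment has not run yet
        }
        @Override void addConstraint() {
            strengths.add("scale");         // NPE: strengths is still null
        }
    }

    // The fix: the superclass no longer calls addConstraint(); each subclass does, after init.
    static abstract class FixedBinaryConstraint {
        abstract void addConstraint();
    }

    static class FixedScaleConstraint extends FixedBinaryConstraint {
        List<String> strengths = new ArrayList<>();
        FixedScaleConstraint() {
            addConstraint();                // fields are initialized by now
        }
        @Override void addConstraint() { strengths.add("scale"); }
    }

    public static void main(String[] args) {
        try {
            new ScaleConstraint();
        } catch (NullPointerException e) {
            System.out.println("NPE from super-constructor virtual call");
        }
        System.out.println(new FixedScaleConstraint().strengths);  // prints: [scale]
    }
}
```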

Here are the DeltaBlue numbers for Dart and Java on my ThinkPad W510:

VM                     Time (µs)    Score (runs/s)
Dart (22416)           2,810.39     355.82
Dart (22577)           2,283.11     438.00
Java (1.7.0_21-b11)    2,728.51     366.50
Java (1.8.0-ea)        2,693.14     371.31
Java (1.7 32-bit)      3,555.95     281.22

Update 5/11 More numbers: running for 45 seconds improves the performance of the 64-bit JVM (1.7,45s) but not the 32-bit one (1.7 32-bit,45s); the 32-bit Server JVM (32-server,45s) performs just as fast as the 64-bit JVM; the xxgreg version (xxgreg,45s) of DeltaBlue runs slower on the 64-bit JVM than my version ported from Dart; the xxgreg benchmark (xxgreg-run) uses a different harness and measurements include VM startup and warmup time.

VM                       Time (µs)    Score (runs/s)
Java (1.7 32-bit,45s)    3,533.99     282.97
Java (32-server,45s)     2,701.67     370.14
Java (1.7,45s)           2,559.38     390.72
Java (xxgreg,45s)        2,780.61     359.63
Dart (xxgreg-run)        2,356.70     424.32
Java (xxgreg-run)        2,800.10     357.13

The number in the first column is the runtime in us as reported by the benchmark harness at the end of a run. The second number is the score as defined on the performance page: “runs/second.” I ran the benchmark on each VM multiple times and as the variance between runs was small enough I picked the result from a random execution for each VM.

The first Dart VM (22416) is the current public release available on the Dart website, while 22577 is the current nightly build. I included the nightly build, as it is clearly visible on the performance page that Dart saw a major gain in performance as of build 22437. My test confirmed this observation.

The results are truly impressive. Dart, still a baby at 2 years of age and pre-1.0, already exhibits 15% better performance than Java, a veteran of 18 years. I think this truly deserves to be called a case of David vs Goliath.

Update: Both Dart VMs tested are 32-bit, while the two original Java VMs are 64-bit. Tested with the 32-bit Java 1.7.0_21 VM with even more disappointing results.


The History of Many-core

When looking for a good reference to back the “many-core problem” assertion in my Master’s thesis, this is the best I could find as a primary source. Multicore: Fallout of a Hardware Revolution holds an excellent description of the reasons behind the shift from increasing clock speeds to multiplying the number of cores in modern CPUs.

In particular:

“Hidden concurrency burns power
 Speculation, dynamic dependence checking, etc.
 Push parallelism discovery to software (compilers and application programmers) to save power”

…and it is a hidden treasure of information on the history of modern processor architecture optimization techniques.

Actors Newspeak Thesis

Value Objects in Newspeak

This is a quick dump of a rough design sketch for Value objects in Newspeak, which builds upon section 3.1.1 of the current version of the Newspeak language specification.

  1. Value classes allow explicit intent. The class declaration is automatically annotated with metadata that expresses the intent for instances to be value objects.
  2. Value classes use special syntax that introduces the said metadata annotation (e.g. valueclass X instead of class X).
  3. Value classes can only be mixed in with other Value classes.
  4. Value classes can only have immutable slots.
  5. The root of the value classes is Value, which extends from Object. The Value class overrides the ==  method and delegates it to =. The Value class overrides = to compare all the slots recursively using =. The Value class overrides the asString method to give a neat stringified representation of the Value object in a JSON-like format. Value class computations for =, asString bottom out on built-in Value classes, like Number, Character, String etc. (overriding = and asString is explicitly inspired by the behavior of case classes in Scala).
  6. The Value class overrides the identityHash method to delegate to the hash method, and overrides the hash methods with some simple, yet-to-be-determined, recursive hashing algorithm (e.g. XOR-ing the hashes of all the slots).
  7. Value objects can only point to other Value objects.
  8. Value class declarations can only be nested inside other Value class declarations.
    Update 2/10/2012: Another option that seems very attractive right now would be to allow value class declarations to be lexically nested inside non-value class declarations but cut off from the non-value part of their lexical scope (the enclosing object chain stops at the outermost value class, excluding all enclosing non-value classes).
  9. This implies the enclosing object of a Value class is always a value object.
  10. Simply annotating a class declaration as “<Value>” is not enough. Syntax is required for valueclass declarations in order to ensure that Value classes always extend other Value classes. This allows a Value class with no explicit superclass clause to implicitly extend the built-in Value class, instead of Object, which is the default superclass for regular classes.
  11. The constraints on Value objects and Value classes are verified at mixin application time (the superclass is a Value class), and object construction time (all slots contain other Value objects).
  12. The enclosing object does not need to be verified at mixin application time, because the enclosing scope of a Value class declaration can be verified at compile time.
  13. Value classes are also Value objects.
  14. nil is a Value object.
  15. Value class declarations can contain nested non-value (regular) class declarations. More generally speaking, Value objects can produce (act as factories for) non-value objects.
    Update 2/10/2012: An important corollary of the above is that non-value classes enclosed in a value object are value objects themselves.
  16. Value objects are awesome! They are containers for data and the unit of data transfer between Actors in Newspeak, and also the building block for immutable data structures.
  17. Update 11/24/2011:
  1. Every class whose enclosing object is a value object is also a value object (but not necessarily a value class!).
    Update 11/27/2011:
    Justification for the above: if multiple equivalent instances of a value class are indistinguishable, then all of the instances’ constituent parts, nested classes included, must be indistinguishable as well. Think: a = b holds, but a NestedClass = b NestedClass does not – this is unacceptable!
  2. We must determine rules for when closure and activation objects are value objects, so we can safely deal with simultaneous slots in value classes (at construction time, the closure object that captures each simultaneous slot initializer must be a value object, then at lazy evaluation time, the result must be a value object, otherwise an exception is thrown and the simultaneous slot is not resolved).
    Update  2/10/2012: One alternative that comes to mind but does not seem very attractive would be to have special syntax for closures that are value objects, say {{ … }} denotes a closure that is always a value object and has no access to enclosing mutable state.
    A more attractive alternative would be to extend the syntax for object literals to support value object literals. All of a sudden, object literals appear much more important than before. For example, value-object closures and/or object literals make it possible to build a Scala-like parallel collections library on top of actors.
    Actually, the above is not quite correct: a Scala-like parallel collections library in Newspeak would benefit more from value class literals that can be nested inside non-value classes.
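Points 5 and 6 above closely resemble what Scala case classes – and, nowadays, Java records – derive automatically. As a rough analogy only (this is Java 16+ syntax, not Newspeak, and the types are hypothetical examples):

```java
/** Java-record analogy for points 5-6: structural equality, derived hash, readable string form. */
public class ValueSketch {
    record Point(int x, int y) {}
    record Segment(Point from, Point to) {}  // value objects only point to other value objects

    public static void main(String[] args) {
        Segment a = new Segment(new Point(0, 0), new Point(3, 4));
        Segment b = new Segment(new Point(0, 0), new Point(3, 4));
        System.out.println(a.equals(b));                   // true: recursive, slot-by-slot equality
        System.out.println(a.hashCode() == b.hashCode());  // true: hash derived from the slots
        System.out.println(a);  // neat stringified form: Segment[from=Point[x=0, y=0], ...]
    }
}
```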
Actors Thesis

References and Actors

In E, references are distinct from the objects they designate. This might seem apparent, but it is not necessarily so. In traditional languages like Java, first-class references are almost indistinguishable from the objects they designate. They are internally represented as 4- to 8-byte pointers and while there is a distinction between reference equality (two references pointing to the same object) and object equality (two distinct objects with identical/indistinguishable contents and behavior), there is not much else to worry about.

In E, however, the rabbit hole goes deeper: there are multiple *types* of references. The word type might be a misnomer, but I find it a good way to think about references. The E thesis does not discuss types of references, but rather states that a reference can be in one of the following states:

  • Local Promise
  • Remote Promise
  • Near Reference
  • Far Reference
  • Broken Reference

In this discussion reference type is a synonym for reference state. The reason the term state seems more appropriate is that a single reference goes through several transitions between states in the course of its lifetime. In other words, a reference can switch types.

The problem this poses for an implementation is that references in different states hold different information. A near reference is the simplest case, the familiar reference from Java – it holds the address of an object within the current VM’s heap. A promise, however, holds an unbounded list of pending messages and whenResolved listeners. A far reference holds whatever information is necessary to transmit messages to its target object, including potentially a distinct queue of messages pending delivery. A broken reference holds exception information regarding the reason for the reference breakage.

Classes are a natural way to think about implementing the different states of a reference – the information and behavior for each state is represented by a distinct class. The problem arises when a reference needs to switch states: the need then arises for an object to change its class dynamically, which is not functionality traditionally available in object-oriented programming languages.
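One conventional workaround is delegation: a stable wrapper object forwards to a swappable state object, so “becoming” another state is just a field update. A rough Java sketch of this idea (all names are mine, and only two of the states are shown):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

/** Reference states via delegation: the Ref wrapper stays put while its state object is swapped. */
public class RefStates {
    interface RefState { void send(String message); }

    static final class Ref {
        private RefState state;
        Ref(RefState initial) { state = initial; }
        void send(String message) { state.send(message); }
        void become(RefState next) { state = next; }  // the "class change" is a field update
    }

    /** Promise: buffers messages until the reference resolves. */
    static final class Promise implements RefState {
        final Queue<String> pending = new ArrayDeque<>();
        public void send(String message) { pending.add(message); }
    }

    /** Near reference: delivers directly to a local target. */
    static final class Near implements RefState {
        final Consumer<String> target;
        Near(Consumer<String> target) { this.target = target; }
        public void send(String message) { target.accept(message); }
    }

    public static void main(String[] args) {
        Promise promise = new Promise();
        Ref ref = new Ref(promise);
        ref.send("hello");                    // buffered while unresolved

        Near near = new Near(m -> System.out.println("delivered: " + m));
        promise.pending.forEach(near::send);  // flush the pending queue on resolution
        ref.become(near);                     // the same reference is now a near reference
        ref.send("world");                    // delivered immediately
    }
}
```

A full implementation would also cover far and broken references and the resolution transitions discussed below, but the delegation shape stays the same.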

Another problem is the possibility that references might chain. In other words, the possibility that a reference might point to another reference, instead of directly pointing to an object. A promise might get resolved to another promise. Or, even more disturbingly, a far reference might point to a promise reference. This possibility of chaining is actually excluded in the reference states model presented by Miller. Instead of a promise resolving to another promise, the promise reference simply makes a state transition, or in other words, the reference becomes the other promise, instead of pointing to it. In a similar fashion, a promise will get deserialized as a promise for the same result as the original promise, instead of being deserialized as a far reference to a promise (which would introduce chaining of references).

This, in essence, leads to an important conclusion. The serialization/deserialization implementation must include special handling logic for references in different states. For instance:

  • A near reference might have to be deserialized as a far reference
  • A near promise is deserialized as a remote promise linked to the same resolver as the original near promise
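
As a hedged sketch of what such per-state handling might look like (Python, with all names and the wire format invented for illustration), the deserializer dispatches on the reference state recorded on the wire:

```python
class FarRef:
    """Hypothetical far reference: knows how to reach a remote object."""
    def __init__(self, connection, target_id):
        self.connection = connection
        self.target_id = target_id

class RemotePromise:
    """Hypothetical remote promise: linked to the sender's resolver."""
    def __init__(self, connection, resolver_id):
        self.connection = connection
        self.resolver_id = resolver_id

def deserialize_ref(state_tag, payload, connection):
    # Per-state handling: what was near on the sender's side is far here,
    # and a near promise arrives as a remote promise for the same result.
    if state_tag == "near":
        return FarRef(connection, payload)
    if state_tag == "promise":
        return RemotePromise(connection, payload)
    raise ValueError("unknown reference state: " + state_tag)

r = deserialize_ref("near", 17, "connA")
assert isinstance(r, FarRef) and r.target_id == 17
p = deserialize_ref("promise", 99, "connA")
assert isinstance(p, RemotePromise) and p.resolver_id == 99
```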


Furthermore, the resolution logic must handle these distinct cases as well, in order to cover the different state transitions that can originate from the promise states:

  • Become another promise (for a new result)
  • Resolve to and become a far or near reference
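
A minimal sketch of that branching (Python, invented names), where resolving to another promise means adopting its state rather than chaining to it:

```python
class PromiseState:
    def __init__(self):
        self.pending = []            # queued messages and listeners

class NearState:
    def __init__(self, target):
        self.target = target         # direct pointer to a local object

class Ref:
    def __init__(self, state):
        self.state = state

def resolve(ref, result):
    if isinstance(result, Ref) and isinstance(result.state, PromiseState):
        # Become the other promise: share its state, never point at it.
        ref.state = result.state
    else:
        # Resolve to (and become) a near reference.
        ref.state = NearState(result)

a, b = Ref(PromiseState()), Ref(PromiseState())
resolve(a, b)
assert a.state is b.state            # a became b's promise -- no chain formed
c = Ref(PromiseState())
resolve(c, 7)
assert isinstance(c.state, NearState) and c.state.target == 7
```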

As already discussed, the most natural way to implement the different reference states is as classes. At this point the notion of reference states as types comes into the picture. References are objects in our runtime VM, but they are a distinct kind of proxy object that provides special services and requires treatment different from regular application objects. As explained above, for deserialization and resolution purposes we need to be able to distinguish between a near reference to an application object and a near reference to a reference object (since fundamentally the runtime VM provides primitive support only for near references, and the other reference states are reified as regular objects). Furthermore, it is clear from the examples above that we also need to be able to distinguish between the different *types* of reference states (Local Promise vs Remote Promise vs Far Reference, etc.).

Since in Newspeak there is no global namespace for classes, and at runtime every class is simply a dynamic aggregation of mixin applications, we cannot test the class names of objects. Conceptually, doing so would be equivalent to introducing some sort of type system. Yet that is exactly what we need – the ability to distinguish between different types of objects (one per reference state), albeit for a very restricted set of types.

Since the Past and Actors libraries are a core part of the language, I propose to meet the need for type checking using the following idiom, which differs slightly from the is* message idiom for arbitrary objects already implemented in the NewspeakObject doesNotUnderstand: protocol. The idea is that since the Past and Actors libraries are singleton modules, the sole managers of the instances of objects that represent reference states, and unlikely candidates for extension by applications, we can simply test for class equality like this:

(obj class = Promise)

where obj is an object whose type is being tested and Promise is a reference to the class instance local to the current singleton module instance. Naturally,

Promise new

is exclusively used to construct Promises from within the Past module, for example.
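
The closest Python analogue of this idiom (shown for illustration only) is an exact class-identity test, which – unlike a general instance-of check – deliberately excludes subclasses:

```python
class Promise:
    pass

class SubclassedPromise(Promise):
    pass

def is_promise(obj):
    # Exact class identity, mirroring (obj class = Promise). Because the
    # module is the sole creator of Promise instances, this test is sound.
    return type(obj) is Promise

assert is_promise(Promise())
assert not is_promise(SubclassedPromise())   # subclasses do not qualify
assert not is_promise("some other object")
```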

Personal Technology

On Openness

I am a firm believer in openness. That is the reason I believe open source has such great value. The way I see it, the word open in “open source” does not just refer to the source code: it also means open communication, open structure, open management… openness in every aspect of a project.

Yet, in one of my own projects I failed to abide by my own principle. Two years ago, at the end of the summer of 2008, I left my small GSoC project – a split editor for Eclipse – in the state of a working prototype to begin a full-time job and join a master’s program. For two years now I have neglected the split editor project and kept completely silent, to the point that people have even forgotten there was ever anyone involved in this effort.

I am now nearing the end of my master’s program and, with all classwork completed, am getting ready to begin work on a thesis. Before I do, however, I wanted to clarify the state of affairs of my split editor work. I have not forgotten about it, and I am determined to complete it eventually. Although I will not have any time to dedicate to it for another year, it is first in my queue of side projects after completing my master’s.

Actually, if anyone is willing to pick up the work now, I will be more than willing to provide whatever support I can. Here are the patches with my latest work, updated against the current Eclipse release as of the time of writing – 3.6.1 (R3_6_1 label in CVS):

The file contains four separate patches for the four Eclipse plugin projects involved in the split editor implementation:

  • org.eclipse.ui.workbench – this project contains the bulk of the split editor work.
  • org.eclipse.ui
  • org.eclipse.ui.editors
  • org.eclipse.jdt.ui – the above three projects contain mostly configuration changes to activate the split editor for the Java and Text editors.

To see the split editor in action, check out these four projects from the Eclipse CVS repo (at label R3_6_1), apply the patches and start up an Eclipse launch configuration. If you want to try this but get lost or none of this makes sense, post a comment here and I will be happy to provide more detail.

I have always wanted to make it very easy for people to try out and experience the split editor at the earliest possible stage of its development (at which it stands currently – there are quite a few known bugs). The best way I see for this would be to share a custom build of Eclipse with the split editor work compiled in. Unfortunately, I have never been able to successfully build Eclipse from source. I gave it a shot two years ago, and more recently, I spent the last two months frantically trying to build the Eclipse 3.6/3.5/3.7 SDKs, without any success. It seems like I am not alone in this. If at any point fortune strikes me, you can be certain that Eclipse packages with split editor support will appear here immediately. If anyone is willing and able to help with this, please get in touch!



Open Source food

If there is such a thing as Open Source food, then this must be it.


Integrated split editor prototype

A new split editor prototype that is integrated into the Eclipse workbench API has been available as a patch on the Eclipse Bugzilla for some time now. Unfortunately, I have not been able to create an easily deployable plugin that can be installed in any Eclipse 3.4 distribution (by copying it to the dropins or plugins directory). Even if I had, I would probably have to worry about the legal aspects of redistributing Eclipse code, since the plugin would no longer contain only my code.

That said, you can test out the latest split editor by checking out the o.e.ui.workbench project in Eclipse, applying the patch from Bugzilla, and debugging Eclipse from Eclipse. There are no immediately visible changes, but it would be very helpful to have as many people as possible test the split editor in everyday use. I do not expect many users to go through the above procedure, however, so I will keep trying to get an easily installable package out.

Next time I will talk about the internal integration design of the split editor because I believe this is the most interesting part of all. This will be of particular interest to Eclipse plugin developers who want to know how to enable editor splitting for their custom editors.