Tuesday, December 25, 2012

Globally configurable wiring alternatives in CDI

The problem

Among the shortcomings of CDI (1.0) as a general-purpose DI container is the lack of what Spring 3.x users know as profiles (not to mention Guice, which is much more expressive thanks to its Java EDSL approach). The CDI feature that looks closest at first glance, the concept of alternatives, quickly turns out to be utterly useless: you can annotate your beans directly (or indirectly, via stereotypes) as @Alternative and then choose among them for one bean archive (read "jar") only, by selecting them in that archive's META-INF/beans.xml file. So there is no way to switch between wiring profiles without repackaging (possibly all) the jars in the deployment unit. CDI 1.1 improves very slightly on this by allowing a "global" selection in only one of the beans.xml files, which is still far below par.

The implementation-selector pattern

My currently favored work-around consists of the following steps:
  • define an enumeration of profiles
  • define a qualifier annotation referring to one of those profiles
  • annotate service alternatives with the above profile-qualifier
  • define a producer selecting the appropriate profile programmatically, publishing it to "@Default" injection points.
In some more detail:

Profile enum and annotation

Let's start with the annotation and the enum. The annotation is a plain qualifier referring to one of the profiles; the enum is slightly more interesting, because it sports a reference to an annotation literal which comes in handy later, when we select the right instance:
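The original listing appears to be missing here, so the following is a minimal, container-free reconstruction. The names Profile, InProfile and the profile constants are assumptions pieced together from the surrounding text; in real CDI code the annotation would additionally be meta-annotated with @javax.inject.Qualifier, and the enum would expose a subclass of javax.enterprise.util.AnnotationLiteral. Both are omitted so the sketch compiles stand-alone:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The profile enum. In the real code it also carries a reference to an
// annotation literal (extends AnnotationLiteral<InProfile>), used later
// for Instance.select(...).
enum Profile {
    IN_MEMORY_DB,
    PRODUCTION_DB
}

// Qualifier referring to one of the profiles. In a real CDI deployment
// this would additionally be annotated with @javax.inject.Qualifier.
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER })
@interface InProfile {
    Profile value();
}

// Example use, as in the article's next step:
@InProfile(Profile.IN_MEMORY_DB)
class SampleSvcForInMemoryDb { }

class ProfileDemo {
    public static void main(String[] args) {
        InProfile p = SampleSvcForInMemoryDb.class.getAnnotation(InProfile.class);
        System.out.println(p.value()); // prints IN_MEMORY_DB
    }
}
```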

Annotate your service alternatives

Now we can annotate our service alternatives like this:

@InProfile(IN_MEMORY_DB)
public class SampleSvcForInMemoryDb implements SampleSvc { ...

Define the implementation-selector

This is just a bean containing a producer method like the following. Thanks to the @Any qualifier, it gets all available instances injected; it then selects the appropriate one for the activeProfile, which is a member variable in this sample but can, of course, be any method call. Unfortunately, as far as I could see, CDI does not allow a useful generification of this pattern. It's still inferior to what other frameworks offer, but if CDI is a set standard, it's usable.
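The producer listing also seems to be missing, so here is a dependency-free sketch of the selection logic (all names, the second alternative, and the in-memory list are my stand-ins). In real CDI the candidates would arrive via @Inject @Any Instance&lt;SampleSvc&gt;; since Instance&lt;T&gt; is an Iterable&lt;T&gt;, a plain Iterable stands in here so the sketch runs without a container:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

// Minimal re-declarations of the profile enum and qualifier from the
// earlier steps, so this sketch compiles on its own (names are assumptions).
enum Profile { IN_MEMORY_DB, PRODUCTION_DB }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface InProfile { Profile value(); }

interface SampleSvc { }

@InProfile(Profile.IN_MEMORY_DB)
class SampleSvcForInMemoryDb implements SampleSvc { }

@InProfile(Profile.PRODUCTION_DB)
class SampleSvcForProductionDb implements SampleSvc { }

class ImplementationSelector {

    // A member variable in this sample; could be any method call.
    private final Profile activeProfile = Profile.IN_MEMORY_DB;

    // In real CDI this method would be annotated @Produces and take
    // @Any Instance<SampleSvc> candidates; the loop body is the same.
    SampleSvc selectSampleSvc(Iterable<SampleSvc> candidates) {
        for (SampleSvc svc : candidates) {
            InProfile in = svc.getClass().getAnnotation(InProfile.class);
            if (in != null && in.value() == activeProfile) {
                return svc;
            }
        }
        throw new IllegalStateException("No SampleSvc for profile " + activeProfile);
    }

    public static void main(String[] args) {
        SampleSvc chosen = new ImplementationSelector().selectSampleSvc(
                Arrays.asList(new SampleSvcForInMemoryDb(), new SampleSvcForProductionDb()));
        System.out.println(chosen.getClass().getSimpleName()); // prints SampleSvcForInMemoryDb
    }
}
```

With the real Instance API one could skip the loop and call candidates.select(literal).get(), using the annotation literal carried by the enum.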

Tuesday, April 10, 2012

Autocompletion support for Scala's combinator parsers

Recently, I came to revisit Scala a bit more in depth, compared to the odd script or hack I did before. Now things at work are changing a bit: some green-field stuff to be done, even calling for some kind of DSL. OK, maybe not really calling loudly, but close enough to warrant spending some quality playing time with my favorite "next great programming language".
So I threw together some combinator parsers, which was fun in itself, and most of it felt very good. Except there was no autocompletion. The envisioned DSL should, among other things, navigate a mildly complex domain model. But in contrast to a generic expression language, typos in property names should be caught at the syntax level, and possible completions should be displayed when the user types the start of a property (or a dot). What I wanted was something like the made-up example below:
Of course, in the real DSL the properties would not be hardcoded but reflectively collected from the model. The missing key feature here is the CompletingParser mix-in: it doesn't come with the standard Scala library, and I didn't readily find a nice ready-made solution. So this looked like a good opportunity to take Scala's extensibility for a test drive that is both interesting and easily within a Scala beginner's reach.
The basic ideas are
  • override the Parser class' alternative combinator "|" to make it collect possible completions in the ParseResult instead of just returning a Failure
  • override the implicit factory method for string literals to produce an alternative literal parser which differentiates between an unexpected and a missing character
As it turns out, in the standard implementation the | combinator just delegates to ParseResult's append method, which normally discards all results but the successful one or, failing that, returns the failure that consumed the most input. So we just need to replace the Failure parse result with a version that can keep track of possible completions, rather smellily named MissingCompletionOrFailure. This plan led me to the following little trait:
Now a simple command-loop to take our work for an interactive test-drive looks like this:

Tuesday, March 27, 2012

Pleasant surprise: JUnit supports Design by Contract

I've long felt contract envy of languages like Eiffel that support Design by Contract. Only recently, I stumbled over the (not so) recent addition of theories to JUnit. Once the object instances under test are separated from the test code, which now takes the form of a logical implication, making the latter reusable is the obvious next step. Ever been tired of writing tests for your value objects' equals and hashCode methods? Try this:
import static org.hamcrest.Matchers.*;
import static org.junit.Assert.assertThat;
import static org.junit.Assume.assumeThat;

import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

/**
 * Provides JUnit theories that check, whether an arbitrary class 
 * satisfies the Object interface contract regarding the methods 
 * equals and hashCode. Instances of the class(es) to be tested 
 * have to be provided in derived classes 
 * 
 * @author scm
 */
@RunWith(Theories.class)
public abstract class ObjectContractTheory {
    
    @DataPoint
    public static final Object INSTANCE_OF_UNRELATED_TYPE = new String();
    
    @Theory
    public void equality_is_symmetrical( Object o1, Object o2) {
        assumeThat( o1, equalTo( o2 ) );

        assertThat( o2, equalTo( o1 ) );
    }

    @Theory
    public void equality_implies_equal_hashCodes( Object o1, Object o2) {
        assumeThat( o2, notNullValue() );
        assumeThat( o1, equalTo( o2 ) );

        assertThat( o1.hashCode(), is( equalTo( o2.hashCode() ) ) );
    }
        
    @Theory
    public void identity_implies_equality( Object o1, Object o2 ) {
        assumeThat( o1, sameInstance( o2 ) );

        assertThat( o1, equalTo( o2 ) );
    }

    @Theory
    public void any_object_is_unequal_to_null( Object o1 ) {
        assumeThat( o1, notNullValue() );
        
        assertThat( o1, not(  equalTo( null ) ) );
    }

    @Theory
    public void unrelated_classes_are_not_equal( Object o1, Object o2 ) {
        assumeThat( o1, notNullValue() );
        assumeThat( o2, notNullValue() );
        assumeThat( o1,  not( instanceOf( o2.getClass() ) ) );
        assumeThat( o2,  not( instanceOf( o1.getClass() ) ) );
        
        assertThat( o1, not( equalTo( o2 ) ) );
    }
}


Then to make sure that Integer fulfills that part of the object contract, you would simply write:
public class ObjectContractTest extends ObjectContractTheory {
    @DataPoints
    public static final Integer[] INTS = {1,2,3,4};
}
Which is hardly too much to ask, even if creating your really interesting objects were slightly more verbose.

Sunday, January 17, 2010

Generics Gotcha: Intransitive Inheritance from self-referential types

The setup


I was used to the assumption that type inheritance in Java is transitive; that is, B extends A and C extends B implies that C is a subtype of A as well as of B. As long as A, B, and C denote straight classes, this is true to the best of my knowledge. But as I, admittedly only recently, learned, once generic type expressions come into play, it gets a bit more interesting:
Let's make A a self-referential generic type like
interface A<T extends A<T>>{T x();}

The problem


Suppose we'd now want to introduce some parallel class hierarchy depending on our A, B, C hierarchy, like this:
interface D<T extends A<T>> ...
class E implements D<B> ...
class F implements D<C> ...
Now the compiler complains about the definition of class F, claiming that C is not a valid substitute for the parameter T extends A<T> of type D. This seems funny: type B is allowed, while type C, which is derived from B, is not allowed in the very same type expression with the same upper bound A. After some thinking, this actually makes sense. What's the purpose of having a self-referential type like A? Probably to have some method or attribute which is defined using the type of intended subtypes (such as x shown above). Now, for the first subtype, B, the type parameter T would be bound to B itself, and in x's implementation the return type would naturally be B. The same would, of course, hold for any subtype of B, i.e. also for C. That, in turn, means that C is not a legal direct implementation of A! C is not self-referential any more, because its inherited member x still returns a B.

A resolution


Once the real cause of the problem is clear, the solution is easy: the definition of D needs to be changed to allow A's type parameter to refer to an arbitrary supertype of D's parameter:
interface D<T extends A<? super T>>
Now E and F compile just fine as defined before.
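To make the argument concrete, here is a complete, compilable sketch of both hierarchies with the relaxed bound (the method bodies are minimal stand-ins I made up):

```java
// Self-referential base type: T is intended to be the implementing type itself.
interface A<T extends A<T>> {
    T x();
}

// B binds T to itself, so x() returns B ...
class B implements A<B> {
    public B x() { return this; }
}

// ... and C inherits that x(), still returning B: C is an A<B>, not an A<C>.
class C extends B { }

// The relaxed bound: T only has to implement A for SOME supertype of T.
interface D<T extends A<? super T>> { }

class E implements D<B> { }

// With the original bound "T extends A<T>" this line would not compile.
class F implements D<C> { }

class GenericsDemo {
    public static void main(String[] args) {
        A<B> c = new C(); // C is a subtype of A<B>, not of A<C>
        System.out.println(c.x().getClass().getSimpleName()); // prints C
    }
}
```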

Saturday, November 14, 2009

Spring AOP: Excessive startup time for "perTarget" advice

Well over 300,000 Class.forName calls during context initialisation were the result of setting up a simple "perTarget" advice for a 400-something-bean application. With the standard "per class" instantiation model, the overhead went back to normal, returning the Hibernate SessionFactory to the top of the list of startup resource consumers. From a quick look at the profiler output, AspectJ's type worlds are rebuilt over and over again while evaluating pointcuts; funny.

Saturday, February 23, 2008

Powerbook survived HD-ectomy

My beloved "old" (3.5 years) PowerBook (G4, 15", 1.5 GHz) started to make humming noises, especially when tilted. A couple of weeks later there were sudden hangups, and it would only restart after being given some cooling time.
That it finally survived my amateur surgery, albeit somewhat battered, made me immediately regret my fancying a new MacBook. Some notes:

  • I followed this guide

  • I did need to poke into the DVD-drive slot. I found a corkscrew useful for that; just don't apply too much force, unless you actually love that slightly battered look ...

  • As replacement, I chose the Hitachi Travelstar 5K160 (5400 rpm, 8 MB cache, 2.5", 120 GB, P-ATA). In Switzerland it can be had for 107 SFr (e.g. at digitec). Subjectively, it's slightly louder and noticeably faster than the original.

  • When re-assembling, don't absent-mindedly put screws into the holes meant for DVI-plugs. You'll never get them back out. But, anyway, there are plenty of screws, and even with one lost in the plug-hole, it won't fall apart (I hope).

Monday, June 04, 2007

Tapestry Components in Scala

Recently, I became interested in Scala: multiple inheritance via traits, good support for functional programming, and a nice syntax, to mention the greatest highlights.

As a first exercise, I tried to write some Tapestry-4 components in Scala to gain some real-world experience, and to see whether Tapestry's pretty extensive use of bytecode-generation would somehow break Scala's Java-compatibility. I was pretty pleased with the results:

The following is a trait to add authorisation support to arbitrary (form-)components. It controls whether to render the component into which it is mixed in and binds its disabled parameter.

package ch.marcus.components

import org.apache.tapestry._

trait AccessControlled extends AbstractComponent {

  object binding extends IBinding {
    def getObject = Boolean.box(getIdPath.contains("readOnly")) // TODO: delegate to ac-service
    def getObject(c: Class) = getObject
    def getDescription = "AccessControl disabled-binding"
    def setObject(o: Object) = {}
    def isInvariant = false
    def getLocation = null
  }

  override def finishLoad = {
    setBinding("disabled", binding)
  }

  /** Derived components delegate to the desired renderComponent method here: */
  def renderAccessControlled(w: IMarkupWriter, c: IRequestCycle)

  override def renderComponent(w: IMarkupWriter, c: IRequestCycle) {
    if (!getIdPath.contains("invisible")) // TODO: delegate to ac-service
      renderAccessControlled(w, c)
  }
}


Here is how to derive an access-controlled version of the standard TextField component by inheriting from TextField and the trait we just defined:


package ch.marcus.components

import org.apache.tapestry._
import org.apache.tapestry.form._

abstract class AuthorisedTextField extends TextField with AccessControlled {

  // duplicate from scala.Object to make Tapestry's class-enhancer happy
  @remote override def $tag(): Int = 0

  override def renderAccessControlled(w: IMarkupWriter, c: IRequestCycle) =
    super[TextField].renderComponent(w, c)
}

The only gotcha is the override of $tag, which seems to be necessary due to a glitch in Tapestry's class-enhancer. Oh, and for components with a specification (.jwc) file, that file must be copied for the derived component. This is slightly annoying, but unnecessary for components that consistently use annotations (specless components).

This is what a Tapestry page looks like in Scala:

package ch.marcus.pages

import org.apache.tapestry.annotations._
import scala.reflect._

abstract class Home extends ScalaPage {
  val text = "Hello, this is Scala."

  @Persist
  def getMbr: String
  def setMbr(m: String)

  def onSubmit = {
    setMbr(getMbr + "x")
  }
}

Note how clean the code looks without all the syntactic noise you'd have to add in Java, and how nicely the Tapestry annotations (@Persist) work with Scala.