Hi Maria,

Their objection to SignWriting seems to be that 1) computers can't animate it and 2) most people don't read it.

In terms of its suitability as a candidate for use in an [Example-Based Machine Translation] system, SignWriting lacks the explicit linguistic detail necessary for the generation of signs using an avatar.

This is false.  You can check out the VSign project from 2004:
http://vsigns.iti.gr:8080/VSigns/index.html

The 2-dimensional nature of SignWriting is easy for a human to understand, but difficult for a computer.  It is possible to animate a simple sign using only the 2-dimensional layout of symbols.  For more complicated signs, it is possible to utilize the SignSpelling Sequence to order the actions, position the symbols, and add extra information when needed.
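As a rough illustration of that idea, here is a minimal Python sketch (not the VSigns implementation; the symbol IDs, data layout, and function names are hypothetical) of how a 2-dimensional symbol layout plus an optional SignSpelling Sequence could be turned into an ordered list of animation steps for an avatar:

# A minimal sketch of deriving animation order from a SignWriting sign box.
# All names and structures here are hypothetical illustrations, not a real
# SignWriting or VSigns API.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Symbol:
    """One SignWriting symbol placed in the 2-D sign box."""
    symbol_id: str   # e.g. an ISWA-style symbol key (hypothetical format)
    x: int           # horizontal position in the sign box
    y: int           # vertical position in the sign box


def animation_order(symbols: List[Symbol],
                    spelling_sequence: Optional[List[str]] = None) -> List[Symbol]:
    """Return symbols in the order an avatar should articulate them.

    For simple signs the 2-D layout alone is used (top-to-bottom,
    left-to-right).  For more complicated signs, an explicit
    SignSpelling Sequence fixes the temporal order.
    """
    if spelling_sequence:
        rank = {sym_id: i for i, sym_id in enumerate(spelling_sequence)}
        # Symbols missing from the sequence are placed after sequenced ones.
        return sorted(symbols, key=lambda s: rank.get(s.symbol_id, len(rank)))
    # Fallback: read the sign box spatially.
    return sorted(symbols, key=lambda s: (s.y, s.x))


if __name__ == "__main__":
    sign = [
        Symbol("hand-flat-right", x=60, y=40),
        Symbol("movement-up", x=60, y=10),
        Symbol("contact-touch", x=60, y=25),
    ]
    # With a SignSpelling Sequence the temporal order overrides the layout.
    for step in animation_order(sign, ["hand-flat-right", "contact-touch", "movement-up"]):
        print(step.symbol_id)

When no SignSpelling Sequence is given, the sketch falls back to reading the sign box top-to-bottom and left-to-right; an explicit sequence overrides that spatial order when a sign is too complicated for the layout alone.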

Annotated corpora, on the other hand, have the potential to carry varying degrees of granularity of linguistic detail, thereby bypassing the need to translate using SignWriting and then derive such details from the resulting SignWriting symbols.

I'm not sure why they see SignWriting as an intermediate step.  The paper clearly states that documentation should be provided in a person's native language so that they can read it.  Watching a video is not reading.  Handing out a piece of paper is not the same as requiring a computer terminal.

Another issue with SignWriting is that the majority of signers are unfamiliar with it, which lowers its appeal for use as the final translation output.

This may support the idea of including animation from the beginning, but it does not negate the need for written material for people to read.

Regards,
-Steve