
import-bot
#1
[Originally posted by pauldb]

Hi,
I'm using the TargetDecorating example - the WRM drag works really nicely.
However, in addition to mouse drags, I want the user to be able to position
objects by entering coordinates into text fields.
I would like your advice on how best to do this so that they may subsequently
drag the object from the position previously specified in those text fields.

My first attempt entailed getting the target node of the appropriate
actuator plug-in (the actuator being used in the drag, namely
slide.getTranslation()), casting it to a TransformGroup, and setting its
transform manually.

Doing this moves my object to where I want it, but when I start dragging it
again, the object jumps back to where it was before I manually moved it. I
imagined that calling syncActuation on the actuator used for dragging would
overcome this - doesn't that synchronise the actuator's knowledge of where
the target node is with the actual value? - but unfortunately that has not
proved to be the case. I've also tried using the updateActuation method to
point at the manually input location, all to no avail.

OK, so you may argue that I should use classes from within the framework, but
I'm afraid my understanding of the framework is not sufficient for me to know
where to start. Do I need a separate actuator for my text field data?

I would be grateful for any help.
Thanks,
-Paul
import-bot
#2
Re: Actuators & syncActuation
[Originally posted by author]

The book and the javadoc warn that you must use the framework to manipulate
actuation state; otherwise, the results are undefined. This is because the
framework must maintain the full or partial actuation state. Depending on the
actuator, the state resulting from direct manipulation of the transform will
either be overwritten by the framework, as you have observed, or it will throw
things completely out of sync.

The distinction between initActuation(), updateActuation(), and
syncActuation() can be a subtle one. Take a look at page 337 of the book.
Essentially, init sets the absolute "reference" state; update sets the state
relative to the reference state; and sync sets the reference state to the
current absolute state, which can be derived either from the current init plus
updates or from the transform itself (which one is used depends on the
actuator implementation). As such, there is no one way to do what you want.
You can let the user enter deltas relative to the reference established by
initActuation(), or you can let the user enter absolute values, which you set
with initActuation().
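The reference/update/sync relationship can be sketched in miniature. This is not the framework's actual API - just an illustrative model with invented names, using a single double in place of a 3D transform:

```java
// Illustrative model (NOT the framework's API) of init/update/sync semantics.
// A double stands in for the target's transform state.
class ActuationState {
    private double reference;   // absolute "reference" state, set by init/sync
    private double update;      // latest state relative to the reference

    ActuationState(double initial) {
        initActuation(initial);
    }

    // init: set the absolute reference state, clearing any pending update
    void initActuation(double absolute) {
        reference = absolute;
        update = 0;
    }

    // update: set the state relative to the current reference
    void updateActuation(double relative) {
        update = relative;
    }

    // sync: fold the current absolute state back into the reference
    void syncActuation() {
        reference = value();
        update = 0;
    }

    // current absolute state: reference plus the latest update
    double value() {
        return reference + update;
    }
}
```

In this model you can see why setting the target's transform directly fails: the framework still holds its own reference and update values, and the next drag rebuilds the state from those, snapping the object back.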

First you have to decide whether manual entry is absolute, or relative to the
last drag or manual state value. For absolute entry, I suggest that you
establish a well-known (to the user) reference coordinate system, which you
set with a single initActuation(). Then the user enters manual values that are
set using updateActuation(). As long as drags are "relative", the reference
state of the actuator will remain unchanged (i.e. manual entry and drags both
use updateActuation() only -- not initActuation() or syncActuation()).
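A minimal sketch of that suggested flow, under the same caveats (invented names, one double in place of a transform - not the framework's actual API): the reference is set once, and both text-field entry and drags are expressed as values relative to it.

```java
// Illustrative sketch (NOT the framework's API): a single fixed reference,
// with manual entry and drags both routed through the update path.
class ManualEntryFlow {
    private final double reference;  // well-known reference, set once ("initActuation")
    private double offset;           // current value relative to the reference

    ManualEntryFlow(double referenceOrigin) {
        reference = referenceOrigin; // the one-time initActuation()
        offset = 0;
    }

    // Text fields supply an absolute position; convert it to a value
    // relative to the reference and apply it as an update.
    void onTextFieldEntry(double absolute) {
        offset = absolute - reference;   // "updateActuation(absolute - reference)"
    }

    // Drags supply a delta; accumulate it into the offset so the next
    // drag continues from the manually entered position.
    void onDragDelta(double delta) {
        offset += delta;                 // "updateActuation(offset + delta)"
    }

    double position() {
        return reference + offset;
    }
}
```

Because neither path touches the reference, a drag that follows a manual entry starts from the manually entered position rather than jumping back.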

--jon