Safari version: figure 5.4 (record set administration page) is nonexistent. The caption is there, but there's no figure or picture.

Same with figure 5.5 (5.3 and 5.6 are fine).
In section "2.5. Troubleshooting", I don't think you specifically cover issues with running behind a corporate firewall. In my case, I found it was impossible to get anything accomplished within those constraints. I had to disconnect from my VPN for this. Working on this from my office will be even harder.
Oh, duh. I spent so much time figuring out what to set for this, and forgot the http rule.
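For anyone else who hits this: the missing piece was an inbound HTTP rule on the instance's security group. The CLI equivalent would be something like the following, where the group ID is a placeholder:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0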
I'm on section "2.4.2. Creating the website".

I've been able to run the instance and ssh to it. I installed lamp-server^ and let that complete. I verified that I have a reasonable index.html in /var/www/html. I can't do this directly from my laptop; I have to ssh to a Linux box we use on the network.

And from the instance shell, I curled to localhost and got the expected content. However, when I pasted the public IP address into my laptop browser, or when I tried to curl to the public IP from my Linux box or laptop, I got connection timeouts.

This is within a corporate firewall, but I would expect that if I was able to ssh to the box, I should be able to curl to it. I suppose it's possible apache isn't running (or not running properly), but this is pretty much "out of the box". I did a ps for apache, and I found it running.

I would ask about this on the AWS forum, but I've already posted once today (*grumble*).
In section "2.4.1. Installing the software", it mentions when using apt-get "Don’t leave out the caret (^) character at the end of the command." Anyone reading this who doesn't know about this will mentally file this in the WTF bucket. I've even read about it now and still put it in that bucket, but it still might be useful to provide a short "by the way" about this.
This isn't strictly an "error", so I'll mention it in a separate topic.

In section "2.2. Launching an AWS instance" it briefly talks about the "Add Tags" page. The text makes it sound like a "tag" is a single string value, when they're actually a key and a value. The example mentions giving it a "name", which is a clue to define a tag with a key of "name", but it would be good if this was stated explicitly.
In section "16.1.3. Adding tolerations to pods", I find it curious that the pod toleration requires specifying the effect. I assume it isn't possible to have two taints that vary only by the effect?
This isn't an error in the text, just something that might be better if it was more specific.

In section "14.2.3. Understanding how apps in containers see limits", it mentions the problem known in the Java world that the JVM doesn't properly acknowledge container limits, as opposed to node limits. The text says that "new versions of Java alleviate that problem by taking the configured container limits into account". It would be better if this specifically referred to Java 9. In corporate worlds, it's not that unusual for Java 8 to be "new".
In section "13.4.3. Isolating the network between Kubernetes namespaces", the detail for listing 13.23 refers to "microservice= shopping-cart", but I believe that should be "app=shopping-cart".
Ah, right. The deployment is a stepwise transition from old to new.
In section "11.1.6. Introducing the controllers running in the Controller Manager ", I found the following sentence:

The Deployment controller performs a rollout of a new version each time a Deployment object is modified (if the modification should affect the deployed pods). It does this by creating a ReplicaSet and then appropriately scaling both the old and the new ReplicaSet based...


Is this true? If this is not a misstatement, what would it change in the old ReplicaSet?
Section "6.2.1. Using an emptyDir volume" has a problem that I ran into in earlier sections, and will probably hit many more times. The build of the "fortune" image fails when run behind a proxy.

I see the following:

Get:2 http://archive.ubuntu.com/ubuntu xenial InRelease [14.8 kB]
Err:1 http://security.ubuntu.com/ubuntu xenial-security InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)


The fix is to configure the image so that apt has proxy information. This could be done by writing a custom apt.conf file to dump into the image, but it's likely easier to just set the http_proxy and https_proxy environment variables in the Dockerfile, making sure they are BEFORE the apt calls, like this:

FROM ubuntu:latest
ENV http_proxy "http://proxyhost:proxyport"
ENV https_proxy "http://proxyhost:proxyport"
RUN apt-get update && apt-get -y install fortune
ADD fortuneloop.sh /bin/fortuneloop.sh
ENTRYPOINT /bin/fortuneloop.sh
I later remembered that earlier text had mentioned that the unhealthy app simply returns a 500 status code after a certain number of requests. I inspected the code in the GitHub project to confirm this. So, I tried doing the "port-forward" thing so I could hit the app. I did it several times, until it started giving me "I'm not well. Please restart me!". Unfortunately, no one was listening to it. I listed the pods after getting this message, and the pod never seemed to restart.
So in section "4.1.2. Creating an HTTP-based liveness probe", I first found it curious that the text just says to create the pod without being explicit about exactly how to do that. Up to this point in the book, it's been very explicit what steps need to be taken. I figured this was the point in the book where we're supposed to just know how to do this.

The text provided the yaml for the pod definition, but also said that the image was pushed to docker hub, so I didn't actually have to enter the yaml text. The text itself didn't actually say what the image was, but I saw the image name in the yaml text, so I figured that was it.

I tried a couple of different ways to create the pod. I first used the model that creates a replication controller. That worked, but I noticed that even after waiting quite a long time, I never saw the pod restart. I then tried deleting that and just creating a deployment (hadn't even tried that yet). That had the same result: it started the pod, but never restarted it.

So, I deleted that and simply tried entering the text of the given "kubia-liveness-probe.yaml" file in the book. I ran "kubectl create -f kubia-liveness-probe.yaml". That gave me this:

error: unable to recognize "kubia-liveness-probe.yaml": no matches for /, Kind=pod


I found a couple of other occurrences of this error on the net, but I didn't see any resolution for it.
That was already set. Doesn't make any difference.

Note that I found the following related issue for minikube, which I've commented on: https://github.com/kubernetes/minikube/issues/2453 .
Hmm, looks like there are more issues related to this. Following the steps in the book, I then created the kubia-http LoadBalancer. When I tried to do "minikube service kubia-http" to get the host:port, I got this:

Error opening service: Could not find finalized endpoint being pointed to by kubia-http: Error validating service: Error getting service kubia-http: Get https://192.168.99.100:8443/api/v1/namespaces/default/services/kubia-http: Proxy Authentication Required


So this looks like it's seeing my https_proxy setting, but not my "no_proxy" setting, or perhaps the format I used isn't working. I found that the documentation for specifying IP addresses in no_proxy is ambiguous, especially for address ranges. I thought it would support CIDR, so I set it to "192.168.0.0/16", but for this piece that didn't appear to work.
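In case it helps anyone else, the obvious next thing to try is listing the minikube IP explicitly instead of relying on a CIDR range:

export no_proxy=localhost,127.0.0.1,192.168.99.100
minikube service kubia-http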
LOL. Your last "doubtful" comment was the key. When you told me I needed the "--docker-env" settings, I thought that "minikube stop" would do everything that was required to start over. I just did "minikube delete" and started over, and now the image was successfully downloaded and the container started. However, I also looked at "minikube logs" and grepped for "proxy", and it never found anything that looked like a connection that was going through a proxy.
Hmm, this seemed promising, but it didn't make any difference. If I run that command and then do "minikube ssh", what should I see in that environment that would tell me whether these variables were set properly? I wouldn't expect to see anything in the simple "env" output, because those are intended to be docker-specific.
So I'm following the instructions in chapter 2 to get the toy kubia image running in a pod. I already had a Docker Hub login because of some earlier experiments with pure Docker images.

As described in chapter 2 of the book, I built the toy "kubia" image, and I was able to push it to Docker Hub. I verified this again by logging into Docker Hub and seeing the image.

I'm doing this on CentOS 7.

I then run the following to create the replication controller and pod running my image:

kubectl run kubia --image=davidmichaelkarr/kubia --port=8080 --generator=run/v1


I waited a while for statuses to change, but it never finished downloading the image. When I describe the pod, I see something like this:

  Normal   Scheduled              24m                 default-scheduler  Successfully assigned kubia-25th5 to minikube
  Normal   SuccessfulMountVolume  24m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-x5nl4"
  Normal   Pulling                22m (x4 over 24m)   kubelet, minikube  pulling image "davidmichaelkarr/kubia"
  Warning  Failed                 22m (x4 over 24m)   kubelet, minikube  Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)


So I then constructed the following command:

curl -v -u 'davidmichaelkarr:**' 'https://registry-1.docker.io/v2/'

This uses the same password I use for Docker Hub (they should be the same, right?).

This gives me the following:

* About to connect() to proxy *** port 8080 (#0)
*   Trying **.**.**.**...
* Connected to *** (**.**.**.**) port 8080 (#0)
* Establish HTTP proxy tunnel to registry-1.docker.io:443
* Server auth using Basic with user 'davidmichaelkarr'
> CONNECT registry-1.docker.io:443 HTTP/1.1
> Host: registry-1.docker.io:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=*.docker.io
*       start date: Aug 02 00:00:00 2017 GMT
*       expire date: Sep 02 12:00:00 2018 GMT
*       common name: *.docker.io
*       issuer: CN=Amazon,OU=Server CA 1B,O=Amazon,C=US
* Server auth using Basic with user 'davidmichaelkarr'
> GET /v2/ HTTP/1.1
> Authorization: Basic ***
> User-Agent: curl/7.29.0
> Host: registry-1.docker.io
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
< Date: Wed, 24 Jan 2018 18:34:39 GMT
< Content-Length: 87
< Strict-Transport-Security: max-age=31536000
<
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
* Connection #0 to host *** left intact


I didn't know what else to try, so I asked about this on StackOverflow: https://stackoverflow.com/questions/48429591/kubectl-cant-connect-to-docker-registry-to-download-image

The answer I got talks about the need to set up a secret that I register in kubectl. I haven't started this yet. Is this just something that was missed in the book, or is there a reason why this might not have been required?
Section 9.2, "Multi-host Docker", Technique 78, "A seamless Docker cluster with Swarm".

This section has the following text and command lines and output:
-----------------------
You can start up your first agent on your current machine as follows:

h1 $ ip addr show eth0 | grep 'inet '
inet 10.194.12.221/20 brd 10.194.15.255 scope global eth0
h1 $ docker run -d swarm join --addr=10.194.12.221:2375 token://$CLUSTER_ID
9bf2db849bac7b33201d6d258187bd14132b74909c72912e5f135b3a4a7f4e51
h1 $ docker run swarm list token://$CLUSTER_ID
10.194.12.221:2375
h1 $ curl https://discovery-stage.hub.docker.com/v1/clusters/$CLUSTER_ID
["10.194.12.221:2375"]
------------------------------------

The "ip addr show eth0" command is apparently intended to get the current IP address. The information here needs to be more general than this. This command line will succeed in only a small number of cases.

First, you're using the old "eth0" convention for interface names. I don't know the entire landscape of this, but the newer convention uses the "predictable network interface names" scheme (described somewhat at https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/). On my CentOS7 laptop, the primary ethernet interface is named "enp0s25".

In addition, it's even more likely these days that the user will be on a laptop connected to the wifi interface, not the ethernet interface (probably not the correct term for that anymore), which on my CentOS7 box is "wlo1". This is the interface that shows my current IP address at the moment.

I would almost conclude that changing this section to describe how to accurately determine the current IP address would result in the section having very little information about Docker, relatively speaking. It might be better to throw up your hands and just tell the reader to determine their current IP address.
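If you do want a semi-portable recipe, the best one I know of is to ask the kernel which source address it would use for an outbound route (this assumes GNU grep, for the -P flag):
------------------
ip route get 8.8.8.8 | grep -oP 'src \K[\d.]+'
------------------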
Section 5.2, "Traditional configuration management tools with Docker", Technique 46, "Traditional: using make with Docker".

This section has two examples of piping the contents of a tar file into "docker build". Unfortunately, they cannot work, because the "docker build" command line uses the "." (period) argument rather than "-" (dash), so stdin is ignored. Both of those command lines should replace "." with "-".
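That is, something like this, where the image tag is arbitrary:
------------------
cat context.tar | docker build -t myimage -
------------------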
Section 2.2, "The Docker daemon", problem "You want to run a Docker container in the background as a service".

The solution says to run the following command:
--------------------------
docker run -d -i -p 1234:1234 --name daemon ubuntu nc -l 1234
----------------------

When I run this (on CentOS7), it fails with:
-------------------
docker: Error response from daemon: Container command 'nc' not found or does not exist..
-------------------

I've tried several other variations of this (wrapping it in "/bin/sh -c", for instance), and I can't get anything to actually work when adding the later telnet attempt.
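The closest working variant I can suggest is to swap in busybox, whose nc needs an explicit -p for the listening port (a sketch; I haven't wired it up to the book's later telnet step):
-------------------
docker run -d -i -p 1234:1234 --name daemon busybox nc -l -p 1234
-------------------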
I failed to resist responding to this.

I too regret the continued reference to "folder" instead of "directory" in modern Linux documentation, as I was using "directories" on Unix before Windows even existed, but let's be realistic. I sincerely doubt there is anyone who is "confused" by this reference and is wondering what OS is being discussed. Any reasonable person can see these are synonyms, in all but an obscure set of use cases.
Section 12.3.2, "Fine-tune scheduling with filters", has the following text in the last paragraph of the section:
------------------
Automatically scaling the number of nodes in a cluster is feasible when only some nodes become unused.
------------------

This should actually read:
------------------
Automatically scaling down the number of nodes in a cluster is feasible when only some nodes become unused.
------------------

As it is conceivable that you could scale UP the number of nodes, and the subject of this paragraph is automatically removing unused nodes, the sentence should specify that we're talking about scaling down.
In section 8.2.2, "File system instructions", right after running the "mailer-live" image, the text says "If you link a watcher to these, you’ll find ...". What does that mean? What is a "watcher" in this context?
Section 8.1, "Packaging Git with a Dockerfile".

Similar to the issue in section 7.1.2, "Preparing packaging for git", you can't install git on a fresh ubuntu container without first updating the cache with "apt-get update", so the initial Dockerfile needs to do this:
--------------------
FROM ubuntu:latest
MAINTAINER "dockerinaction@allingeek.com"
RUN apt-get update
RUN apt-get install -y git
ENTRYPOINT ["git"]
---------------------

If it matters, without that additional line, the image build fails with this:
---------------------------
E: Unable to locate package git
The command '/bin/sh -c apt-get install -y git' returned a non-zero code: 100
-------------------------
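One aside: if this gets fixed, it's probably worth folding the two commands into a single RUN, the usual idiom for keeping a cached "apt-get update" layer from going stale:
---------------------
RUN apt-get update && apt-get install -y git
---------------------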
Section 7.1.2, "Preparing packaging for Git".

This section says to create a fresh "ubuntu:latest" container, run "/bin/bash" on it, and suggests that I install git by running "apt-get -y install git".

This fails, because a freshly created Ubuntu container needs the apt cache initialized with "apt-get update". This takes several minutes to complete. Once it completes, the described installation of git succeeds.
Chapter 6.

Several command lines between sections 6.1.3 and 6.2.2 that should refer to "docker run" are missing the "run" keyword, so they fail immediately with errors like this:
----------------------
flag provided but not defined: -it
See 'docker --help'.
-------------------

The lines in error are:
* docker -it --rm --device /dev/video0:/dev/video0 ubuntu:latest ls -al /dev
* docker -d -u nobody --name ch6_ipc_producer dockerinaction/ch6_ipc -producer
* docker -d -u nobody --name ch6_ipc_consumer dockerinaction/ch6_ipc -consumer (twice)
* docker -d --name ch6_ipc_producer --ipc host dockerinaction/ch6_ipc -producer
* docker -d --name ch6_ipc_consumer --ipc host dockerinaction/ch6_ipc -consumer
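For example, the first one presumably should read:
----------------------
docker run -it --rm --device /dev/video0:/dev/video0 ubuntu:latest ls -al /dev
----------------------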
thompson2526@gmail.com wrote: The image name
dockerfile/mariadb
seems to be incorrect


I thought that just removing "dockerfile/" worked for me, but I noticed later that the container exited immediately with the familiar "database is uninitialized and password option is not specified" error.
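For what it's worth, the stock mariadb image wants a root password supplied up front, so something like this (the password value is a placeholder) should get past that error:
---------------------
docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=secret mariadb
---------------------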
Removing "dockerfile/" got me past the "not found" error, and I did not see the other reported error about missing env vars. I did notice the following warning however:
---------------------
WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
--------------------

Otherwise, the container appeared to successfully start up. I haven't used it yet.
thompson2526@gmail.com wrote: This text is repeated twice:
Container B depends on A:
container A’s IP address
is written into container B


I believe the 2nd time (RHS) is meant to show dependence of C on B, correct?


That's figure 5.12, if it matters.
In section 5.7.3, "Environment modifications", the following text has one minor technical error:
-----------------------------
One additional environment variable of the form <ALIAS>_<PORT> will be created and will contain connection information for one of the exposed ports in URL form.
----------------------

This should actually be:
--------
One additional environment variable of the form <ALIAS>_PORT will be created and will contain connection information for one of the exposed ports in URL form.
---------------

Rendering it as "<PORT>" implies this is substituted with a port number. The actual variable just ends with the string "_PORT".
thompson2526@gmail.com wrote: On Ubuntu 15.10, this example produces confusing results:

 > dk run --rm --hostname barker alpine:latest nslookup barker
nslookup: can't resolve '(null)': Name does not resolve

Name:      barker
Address 1: 172.17.0.2 barker

Is this correct?


I got the same, and I did a little research. It appears that Alpine Linux doesn't know what nameserver to use unless you tell it, so running "nslookup barker localhost" gets better output.
I'm reading DiA on Safari, so I don't have page numbers.

In section 2.5.1, "Read-only file systems", it provides the command line to run the "wordpress:4" image, and then it says to check that the container is running.

Right after running the image, I do "docker ps", and the new container is not running.

I then tried "docker ps -a", and it shows me something like this:
---------------------
bfa057b3d0f2 wordpress:4 "/entrypoint.sh apach" 16 minutes ago Exited (1) 11 seconds ago wp
---------------------

In other words, it appears that it exits immediately after it starts.

So, I then did "docker logs wp", and it says this:
-------------------
error: missing required WORDPRESS_DB_PASSWORD environment variable
Did you forget to -e WORDPRESS_DB_PASSWORD=... ?

(Also of interest might be WORDPRESS_DB_USER and WORDPRESS_DB_NAME.)
-----------------------

Anything I can do here?
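The error message itself suggests the fix, so presumably something like this gets past it (the password is a placeholder, and the container will still need a real database to talk to):
-------------------
docker run -d --name wp -e WORDPRESS_DB_PASSWORD=secret wordpress:4
-------------------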
Reading through chapter 19, "Domain-Specific Languages", it appears you're trying to cover all the ways that someone could "cheat" (deliberately or accidentally) in their script to "get out of the sandbox". You cover what you can do with "import restrictions" seemingly very thoroughly. However, nowhere do you mention that it's very easy to reference a class without importing it, by referencing the fully-qualified class name. This would likely be dealt with in a SecurityManager, which you do talk about, but it would be useful to cover this case.
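For instance, something like this needs no import at all, so import restrictions never see it:

// fully-qualified reference; no import statement anywhere in the script
def f = new java.io.File('/etc/passwd')
println f.text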
Section 17.7, "Testing with the Spock framework", has a curious statement:
Also note that you can use spaces in the method name, a feature provided by Spock, which is implemented using an AST transformation.


There's no doubt that this is not "a feature provided by Spock", as you can do this in any old Groovy script or class. I have no idea why an AST transformation might be involved with this.
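A quick demonstration that this is plain Groovy, nothing Spock-specific (runs as an ordinary script):

class NotSpock {
    def "a method name with spaces"() { return 42 }
}
assert new NotSpock()."a method name with spaces"() == 42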
In section 11.2.1, "NodeBuilder in action: a closer look at builder code", in the following footnote:
Because invoices is the root node, the method name makes no difference in how we use the node in the example. Listing 11.4 also works if you replace builder.invoice with builder.whatever.

Change "builder.invoice" to "builder.invoices".
Following listing 10.43, "Immunity to monkey patching", there is the following text, with "(n)" to indicate what numbered marker the text refers to:
The first case(2) isn’t statically compiled, while in the other(2) we’re using @CompileStatic.


The marker reference for "The first case" is "2", but it should be "1".
In section 10.2.6, "Type checking closures", there is the following text, with my bolding:
Another alternative that the type checker can possibly use to determine argument type information is API metadata. Groovy has several annotations that add metadata to the API. Let’s look at those next.

@ClosureParams

Another alternative that the type checker can use to determine argument type information is API metadata, if it’s available. Groovy provides the @ClosureParams annotation as used in the following listing to give type hints for the expected parameter types of the validation closure.


Not an error, but I would imagine almost duplicating that same phrase so close to each other probably isn't optimal.
Section 10.2.1, "Finding typos", demonstrates that the Groovy compiler doesn't detect a typo, unless you add the @TypeChecked annotation.

It might be worthwhile to mention that IDE support can mitigate the lack of a "@TypeChecked" annotation. For instance, in Eclipse, if you don't have the @TypeChecked annotation, it doesn't report an error for this test case, but it does underline the questionable variable reference. This might be enough to alert the developer of a misspelling.
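For reference, the kind of test case I mean boils down to something like this (my reconstruction, not the book's exact listing); the misspelled reference only becomes a compile-time error once the annotation is present:

import groovy.transform.TypeChecked

@TypeChecked
def greet(String name) {
    println nmae   // typo: compile-time error with @TypeChecked, silently dynamic without it
}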
Regarding figure 9.6, "Classes involved with the @Main local AST transformation":

Assuming that a box "foo" with an arrow pointing to box "bar" with a notation "verb" on the arrow means "foo verb bar", then the arrow labeled "Implements" pointing from the box labeled "ASTTransformation" to the box labeled "MainTransformation" should be pointing in the other direction, as "ASTTransformation" is the interface being implemented by "MainTransformation".
In section 9.2.6, "Scripting support", there is the following text before the code sample describing the @ConditionalInterrupt annotation, with one statement bolded:
The way you specify the conditional interrupt is within a closure annotation parameter. You can reference any variable that’s in scope within this closure. For scripts, general script variables are in scope, and for classes, instance fields are in scope. The following listing shows a script that executes some work 1,000 times or until 10 exceptions have been thrown, whichever comes sooner.

Following this is listing 9.25, titled "Using @ConditionalInterrupt to set an automatic error threshold":
import groovy.transform.ConditionalInterrupt

@ConditionalInterrupt({ count <= 5 })
class BlastOff3 {
    def log = []
    def count = 10

    def countdown() {
        while (count != 0) {
            log << count
            count--
        }
        log << 'ignition'
    }
}

def b = new BlastOff3()
try {
    b.countdown()
} catch (InterruptedException ignore) {
    b.log << 'aborted'
}
assert b.log.join(' ') == '10 9 8 7 6 aborted'

The problem is, the description in the text does not describe what this code does. It causes the exception to be thrown when count reaches 5 or lower.


In section 4.2.2, "Using list operators", there is the following callout (with one phrase bolded):
Avoid negative indexes with half-exclusive ranges

Ranges in List’s subscript operator are IntRanges. Exclusive IntRanges are mapped to inclusive ones at construction time, before the subscript operator comes into play and can map negative indexes to positive ones. This can lead to surprises when mixing positive left and negative right bounds with exclusiveness; for example, IntRange (0..<-2) gets mapped to (0..-1), such that list[0..<-2] is effectively list[0..-1].

Although this is stable and works predictably, it may be confusing for the readers of your code, who may expect it to work like list[0..-3]. For this reason, this situation should be avoided for the sake of clarity.


This seems to be saying that list[0..<-2] is the same as list[0..-1]. I tested this with a simple example:
def lst = [1, 3, 5, 7, 9]
println lst[0..<-2]
println lst[0..-1]

This prints:
[1, 3, 5]
[1, 3, 5, 7, 9]


Is this callout trying to make some other point besides what I've read here?
Never mind. I figured this out. Unary plus doesn't really do anything, but unary minus will return a new ArrayList with all of the entries negated.
In section 3.3.1. Overview of overridable operators, in the table of overriddable operators, it lists the unary minus and plus operators, and in the list of types that the operator "works with", along with the obvious "Number", it lists "ArrayList". What would unary minus or plus do with an ArrayList?
Not an error, but section 2.3.7, "Numbers are objects" talks about the fact that you can use numeric operators on numbers, but it never actually does that. It demonstrates using numeric operators on VARIABLES which hold numbers, but never shows an example directly using numeric operators on numbers, like "3.plus(3)".
You call it a "folder"?

In section 1.1.3, "Power in your code: a feature-rich language", the following example is cited:

println( [String, List, File]*.package*.name )

After the example it says:
... to emphasize that the access to package and name is spread over the list ...

This produces the following output:
[java.lang, java.util, java.io]

The funny thing is, the following produces the exact same output:
println([String, List, File]*.package.name)

It doesn't appear to be necessary to "spread" over package AND name, just package.
Kostis Kapelonis wrote: Hello dkarr

The book is aimed at Java developers that have zero Groovy experience. A whole chapter is devoted to teaching Groovy just enough for Spock tests.
It seems to me that people who are writing Gradle plugins are not Groovy beginners (correct me If I am wrong). The book also does not use or require Gradle.

The last Rebel Labs developer survey found that Gradle was the one technology that most developers were interested in learning. From what I could see, this number was even higher than Groovy. There will be developers whose first involvement with Groovy is from trying to learn Gradle and write custom Gradle plugins, and the tests for those plugins.

I'm not suggesting that the book should require Gradle, but I'm telling you that a large percentage of the people thinking about this book will be new to many of these technologies, but they will likely be initially driven there because of their desire to learn Gradle.
Kostis Kapelonis wrote:
I am thinking whether I need to include a chapter on how to test Groovy code (instead of Java) with Spock.

So to sum up the book is much more generic and has a wider audience than writing Gradle plugins. The TOC is still work in progress (especially
the last chapters are very fuzzy at the moment) but introducing Gradle as a required knowledge would certainly limit the book audience (something that
I am against at).

Can you point me to some tutorials on Gradle plugin testing that you like? Is there something fancy that they do with Spock that needs more explanation?

Have you also looked at "Gradle in Action" by Manning? I think it contains some testing for Gradle plugins (can't remember if it is with Spock or JUnit)

Kostis

Sure, I looked at GiA, but it had very rudimentary coverage in this area.
I haven't read any of the available sections yet, but I looked at the TOC, and I have some concerns.

I have a strong feeling that many people considering this book will be looking at it for writing tests for Gradle plugins. There may be development teams whose primary uses of Groovy are for their custom Gradle plugins and (hopefully) their Spock tests.

Make sure that you have several examples of different kinds of Gradle plugins, and effective tests for those plugins.
By the way, I noticed several examples in the book of code like this:

----------------
StringBuilder sb = new StringBuilder();

// movie table
sb.append("CREATE TABLE " + MovieTable.TABLE_NAME + " (");
sb.append(BaseColumns._ID + " INTEGER PRIMARY KEY, ");
sb.append(MovieColumns.HOMEPAGE + " TEXT, ");
sb.append(MovieColumns.NAME + " TEXT UNIQUE NOT NULL, "); // movie names aren't unique, but for simplification we constrain
sb.append(MovieColumns.RATING + " INTEGER, ");
sb.append(MovieColumns.TAGLINE + " TEXT, ");
sb.append(MovieColumns.THUMB_URL + " TEXT, ");
sb.append(MovieColumns.IMAGE_URL + " TEXT, ");
sb.append(MovieColumns.TRAILER + " TEXT, ");
sb.append(MovieColumns.URL + " TEXT, ");
sb.append(MovieColumns.YEAR + " INTEGER");
sb.append(");");
db.execSQL(sb.toString());
-------------

It is better to do this:
-------------------
String sql =
    "CREATE TABLE " + MovieTable.TABLE_NAME + " (" +
    BaseColumns._ID + " INTEGER PRIMARY KEY, " +
    MovieColumns.HOMEPAGE + " TEXT, " +
    MovieColumns.NAME + " TEXT UNIQUE NOT NULL, " + // movie names aren't unique, but for simplification we constrain
    MovieColumns.RATING + " INTEGER, " +
    MovieColumns.TAGLINE + " TEXT, " +
    MovieColumns.THUMB_URL + " TEXT, " +
    MovieColumns.IMAGE_URL + " TEXT, " +
    MovieColumns.TRAILER + " TEXT, " +
    MovieColumns.URL + " TEXT, " +
    MovieColumns.YEAR + " INTEGER" +
    ");";
db.execSQL(sql);
-------------------

Besides the fact that it's more concise, it's a common misconception that "+" on strings always means wasteful repeated concatenation. In fact, within a single expression the compiler implicitly constructs one StringBuilder to produce the result.

In addition, in this case all of those variable references are compile-time constants, so they are inlined, and when two inlined strings are concatenated the compiler folds them into a single constant. Adding all this together, the entire "sql" string is defined completely at compile time, resulting in no StringBuilder or string concatenation at all.

It's handy to use the Eclipse ByteCode Outline plugin to visualize this (http://andrei.gmxhome.de/bytecode/).
Page 230, listing 9.4. The "taglib" directive specifies a prefix of "s", but the reference in the body uses the prefix "spring".
p. 324, the paragraph that starts with "The big downside ..." appears twice on this page. The second instance should be removed.
Chapter 3, section "Qualifying Ambiguous Dependencies" describes the "@Qualifier" annotation. I believe the information provided here about how the "@Qualifier" annotation works, when not in the context of "qualifier" attributes, is incorrect.

The section has the following text:
----------------------
For example, to ensure that Spring selects a guitar for the eddie bean to play, even if there are other beans that could be wired into the instrument property, you can use @Qualifier to specify a bean named guitar:

@Autowired
@Qualifier("guitar")
private Instrument instrument;


As shown here, the @Qualifier annotation will try to wire in a bean whose ID matches guitar.
-----------

This says that "@Qualifier" will look for either "a bean named guitar" or "a bean whose ID matches guitar". In reality, it doesn't do either of these. The "@Qualifier" annotation only looks at the "qualifier" attribute of beans. It doesn't look at the "name" or the "id" properties.

The only way that I'm aware of to autowire a bean whose id is equal to "guitar" is to use JSR250's "@Resource" annotation.
p. 304, first paragraph in section 11.4.6: Change:

... that gave us the resulting resource wrapped in a RequestEntity ...

To:

... that gave us the resulting resource wrapped in a ResponseEntity ...
P. 302, paragraph before section titled "RECEIVING OBJECT RESPONSES FROM POST REQUESTS":

Change:

The other method, getForLocation(), is unique for POST requests.

to:

The other method, postForLocation(), is unique for POST requests.
At the top of page 175, just before the start of the sub-section titled "RESOLVING INTERNAL VIEWS" is a reference to the "InternalResolverViewResolver" class. That should be "InternalResourceViewResolver".
Concerning chapter 11, the REST chapter: it amazes me that this book continues a pattern I see in the Spring documentation, which is to avoid mentioning that along with the JAX-WS specification there is also a JAX-RS specification. There are at least two implementations of it that integrate with Spring: the Apache CXF framework (which you did mention earlier, in the context of JAX-WS) and the Jersey reference implementation. At least one of them, CXF, does this integration very well.

Personally, I'm very much "Pro Spring" all the way, but this is one area that I never use because CXF does it so well. I continue to be puzzled when I see Spring docs and Spring books simply avoid mentioning that there's another way to do this.
p. 361. In the following sentence:

"As with the cache attribute, you’ll need to set the lookup-on-startup attribute when
setting lookup-on-startup to false."

Change "... the lookup-on-startup attribute ..." to "... the proxy-interface attribute ...".
In section 8.2, "Enterprise use cases for interception", there is a subsection on defining transactional methods with Guice, but there's not a single mention of how this is done in Spring. This is a very important use case in Spring, so I'm surprised you didn't mention it.
My gosh, when was this book written? Section 12.1.1, "Presentation tier" references several books about web frameworks, listing "WebWork in Action" (September 2005) and "Struts in Action" (2002). BOTH of these books are obsolete, as Struts 2 replaces both Struts 1 and WebWork. It's also very odd to call JSF the "heir apparent" to Struts, as they solve very different problems, and Struts 2 is certainly not going away.
In Table 11.2, "A standard archive may load classes either packaged inside it or from any other archives it is dependent on", the entry for the EAR module and entry 1 in the "Code Sources" column has the text:

All JARs in the /lib directory of the EAR

It should be:

All JARs in the APP-INF/lib directory of the EAR
In section 10.3.9, "Joining entities", the first example of INNER JOIN is odd. Here is the code sample:

SELECT u
FROM User u INNER JOIN u.Category c
WHERE u.userId LIKE ?1

This doesn't seem like a very useful example of an inner join, as nothing else in the query references the joined table. If we formed a new query by removing "INNER JOIN u.Category c", would the results be any different in any situation?
The only choices for CascadeType on a relationship are MERGE, PERSIST, REFRESH, REMOVE, and ALL. I would guess that if the entity on the other end of the relationship could have relationships with other entities, you might want to avoid cascading removes, which eliminates REMOVE and ALL from the choices. However, is it reasonable at that point to want to cascade all the others? It appears that isn't an option; you can only pick one of these values. I'm wondering about the implications of this.
Ok, let's say I first start with the assumption I want all the cascading types, of PERSIST, MERGE, REFRESH, and REMOVE. Then, I decide that I don't want REMOVE to cascade. Now, how do I automatically support all the others, except for REMOVE? I can't specify more than one (or can I?), so it looks like I can only specify one of them, and then implement the others manually.
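Partially answering myself: the cascade element is declared as an array, so presumably something like this expresses "everything except REMOVE" (a sketch only; the field is from the book's ActionBazaar domain):

@OneToMany(cascade = { CascadeType.PERSIST, CascadeType.MERGE, CascadeType.REFRESH })
private Set<Bid> bids;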
In section 9.1.4, "Using the EntityManager in ActionBazaar", the "undoItemChanges()" method is supposed to drop any changes to the entity in memory and reload it from the database. The contents of this method are the following:

public Item undoItemChanges(Item item) {
    entityManager.refresh(entityManager.merge(item));
    return item;
}

I'm a little confused by what I see here, if that's the intent of the method. Won't calling "merge(item)" store any pending changes to the object to the database, before returning the object to be sent to the "refresh()" method, which will just reread what was just stored to the database?
In section 8.3.1, "Mapping one-to-one relationships", and Listing 8.8, "Mapping a one-to-one relationship using @PrimaryKeyJoinColumn", the example has the "USER_ID" column being referenced twice, once in the "@Id" annotation and once in the "@PrimaryKeyJoinColumn". If I understand the semantics of this correctly, the latter reference is redundant. Would it be treated as such? That is, if I left off that column reference, would it know to use the same column referenced in the @Id annotation?

In addition, if I instead divided my configuration between the class and the XML config, so that I defined all the table and column names in my XML, would I have to have a duplicate column name reference in the XML for this structure?
Section 7.3.2, "@OneToMany and @ManyToOne" has the following statement:

"For bidirectional one-to-many relationships, ManyToOne is always the owning side of the relationship."

I (and the book) could use some explanation and justification for this statement.
Just a comment for discussion, but I was surprised in reading how application exceptions work, in section 6.2.5, "Transaction and exception handling". It appears that in order to define "application exceptions" that implicitly cause rollbacks, you have to own the exception class. If the EJB is using code from a library, you can't define the exceptions those library methods throw as "application exceptions". I would think the use they've defined for "ApplicationException" would be uncommon, compared to handling exceptions defined in modules you don't own.
AOP is not as obscure as you might think. Anyone somewhat experienced with Spring will see the similarities, and they will see that what EJB 3 provides is a very simple version of AOP, compared to Spring. Many people looking at EJB 3 will also be looking at Spring.
Section 5.3.4, "Implementing business interceptors", could use some discussion about what "AroundInvoke" would typically mean in an AOP environment. The example in the book doesn't accurately represent the "around" notion, just the "before" notion. A true "around" interceptor would execute some code before the wrapped method, call the wrapped method, and then execute some code afterwards. The example could implement this by simply calling the "proceed()" method and storing the result, then printing "Exiting method ...", then returning the saved object from the "proceed()" call.
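Concretely, I'd expect the example to look more like this sketch (the class and method names here are mine, not the book's):

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class TracingInterceptor {
    @AroundInvoke
    public Object trace(InvocationContext ctx) throws Exception {
        System.out.println("Entering " + ctx.getMethod().getName());
        Object result = ctx.proceed();   // invoke the wrapped method
        System.out.println("Exiting " + ctx.getMethod().getName());
        return result;                   // hand the wrapped method's result back
    }
}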
To self-answer this a bit, section 5.2.3, "Looking up resources and EJBs", implies to me that without the @Resource annotation at the class level, the EJB proxy would not have the binding of the "local" JNDI path of the resource into the global path in the Environment Naming Context. Did I state that correctly?
Concerning section 5.2.1, "Resource injection using @Resource" and subsection "Using @Resource at the class level", I don't understand the implications of what is being said here. It defines a class-level Resource annotation for a DataSource, and then shows explicit lookup code for that DataSource, either in the annotated class or a helper class. It looks to me like the lookup code does not require the presence of the Resource annotation. What does the class-level Resource annotation actually do, if anything?

Note that the lookup reference of "java:comp/env/jdbc/ActionBazaarDB" in the example probably should be "java:comp/env/jdbc/actionBazaarDB".
Section 3.4.1 "Using the @EJB annotation" reminds me of the irony of the fact that injection of "@EJB" annotations into POJOs isn't defined in the spec (I would guess it's not obvious how that should work, at the specification level). Considering that hardly anyone ever writes Servlets anymore, because all the web frameworks use POJOs, the only defined injection you get will be to JSF backing beans. The book says that "some application servers will support injection of EJB references into POJOs as a vendor-specific extension". What's the magnitude of "some" for this? What do the major ones do?
In Section 3.3.4 "Stateful bean lifecycle callbacks", the book indicates that if no "@Remove"-marked method is present, "the client would have no way of telling the container when a session should be ended". It implies that there is no way for the client to remove the SFSB manually. Is that true?
I can see some advantages of using SFSBs in large volume, long session systems, as opposed to storing step data in the web session. You can configure a system to passivate SFSBs before you expire them, which gives them a little longer lifetime, while somewhat reducing the memory load until they are activated again.

However, what are reasonable strategies for dealing with when SFSBs timeout, but the session has not timed out, and the user wakes up again? This is a set of edge cases that would require some thought.
I haven't cracked open the (virtual) book yet, but before I start, I have a question about an issue that's been bothering me about the whole concept of defining everything about an EJB inside the class.

I believe using annotations to define some aspects of the bean is useful, but I don't like the idea of specifying things like a table name or a column name inside the class, because I feel that is a concern that should not be coupled to the class.

I've heard you can have a mix of annotations and XML descriptor. Is it practical to define the parameters that are specific to the database instance (table name, column name, etc.) in the XML descriptor, and define everything else inside the class? Are there any difficulties with using that approach?
Sounds good.
Could you clarify this a bit? Are you actually saying that the JPQL in this example is NOT invalid?
Listing 12.16 shows another example of using invokeMethod to proxy or "pretend" method calls. What I don't understand is why it uses that mechanism here, instead of just defining instance methods "newPage", "destroyPage", and "updateBody"? The invocation process doesn't do anything special, it just invokes the named closure.
I entered listing 8.5 into the Groovy console and it almost worked, but the resulting timezone was PST (my local tz), not CET, as specified in the listing. Setting "user.timezone" seemed to have no effect (I tried with "GMT" also).

I'm using Groovy 1.6RC1 with JDK 1.6.0_11 on *gasp* Windows 2000.

As all of the example code is executed before printing, I assume the test was run in the CET timezone (wherever that is), or there's something wonky about my JDK/Groovy/OS combination.
To be a little more clear about my concern, consider the following class (I believe this is legal Groovy syntax):

class Foo {
    private int foo1;
    private int foo2;

    Foo(int foo1, int foo2) {
        this.foo1 = foo1;
        this.foo2 = foo2;
    }

    Foo(Map args) {
        this.foo1 = args.foo1;
        this.foo2 = args.foo2;
    }
}

Then:

Foo foo = new Foo(foo1:1, foo2:2);

Isn't this legal? The text of the section implies that it is not.
Section 7.1.4 "Constructors" and subsection "Named Parameters" makes me wonder whether I can use named parameters in a constructor if I have an overloaded constructor that takes a single Map parameter. This is how it works with normal methods, but the text says that once you add an explicit constructor, named parameters are no longer available. If this is not the case, then the section should be a little more clear on this.
Section 4.4.1 and the paragraph before it fail to mention that this is one area where Groovy can't do something that Java can (as far as I can tell). Although it may be said that modifying a collection while iterating through it is not a good idea, it's certainly possible to do this in Java, as long as you use the Iterator methods, and not the collection methods. Using "iterator.remove()" in a loop is perfectly legal in Java, and will not throw a ConcurrentModificationException.

The Javadoc for "Iterator.remove()" implies this: http://java.sun.com/javase/6/docs/api/java/util/Iterator.html#remove()
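That is, this pattern is perfectly legal (a self-contained example with made-up data):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class SafeRemoval {
    public static void main(String[] args) {
        List<String> words = new ArrayList<String>(Arrays.asList("alpha", "beta", "gamma"));
        for (Iterator<String> it = words.iterator(); it.hasNext();) {
            if (it.next().startsWith("b")) {
                it.remove();   // removing via the iterator: no ConcurrentModificationException
            }
        }
        System.out.println(words);   // prints [alpha, gamma]
    }
}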