hettlage (136) [Avatar] Offline
#1
I feel that some points used in exercises 2.1 and 2.2 might be worth explaining in the main text:

  • Multiplying a tensor A by a scalar s returns the tensor you get when every component of A is multiplied by s.

  • Subtracting a scalar s from a tensor A returns the tensor you get when s is subtracted from every component of A. (A short sketch covering both points is at the end of this post.)

  • Exercise 2.2 also raises some questions:

    1. Is there a reason why you use tf.neg(A) rather than just -A?

    2. What should you use for x? (Something like x = tf.convert_to_tensor(np.linspace(...)) perhaps?)

    3. Why do you use tf.pow(sigma, 2.0) rather than sigma**2.0?

    4. Shouldn't you use mean=0.0 and sigma=1.0?

    Finally, I think that referring to Figure 2.3 makes exercise 2.2 harder rather than easier.
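
To make these questions concrete, here is a minimal sketch of how I currently read exercise 2.2. The range passed to np.linspace, the mean=0.0 and sigma=1.0 values, and the 1.x-style session are my assumptions rather than anything the book states:

    import numpy as np
    import tensorflow as tf

    # Question 2: build x by converting a NumPy range to a tensor.
    # The interval and the number of points are guesses on my part.
    x = tf.convert_to_tensor(np.linspace(-3.0, 3.0, 100))

    # Question 4: assuming the standard normal (mean 0, sigma 1).
    mean = 0.0
    sigma = 1.0

    # Scalar arithmetic broadcasts over every component of the tensor, so
    # (x - mean) subtracts the scalar from each element, and the squaring
    # and division below are likewise element-wise (the two bullets above).
    # The leading minus on the tensor expression does the same job as
    # tf.neg(...) (question 1), and sigma ** 2.0 the same job as
    # tf.pow(sigma, 2.0) for a plain Python float (question 3).
    gaussian = tf.exp(-(x - mean) ** 2.0 / (2.0 * sigma ** 2.0)) \
               / (sigma * np.sqrt(2.0 * np.pi))

    # Evaluate with a 1.x-style session (what the MEAP's listings appear to use).
    with tf.Session() as sess:
        print(sess.run(gaussian)[:5])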

Nishant Shukla (52) [Avatar] Offline
#2
Your questions about exercise 2.2 make it clear that it can be explained better. I'll fix it up to address each of your points. Thank you for the detailed suggestions and questions!
aakoch (2) [Avatar] Offline
#3
I'm reading v9 and I am totally lost with exercise 2.2. Am I really expected to know how to do that? The exercise before was just negating a value, and then this?