Fixed error in grad_chooser (for e.g., max) when dtype is not numpy float64 #199
Open
kswersky wants to merge 1 commit into HIPS:master from
Conversation
Contributor
FYI, I have taken some autograd gradients for a side project, and found float64 casting performance bugs scattered around.
Collaborator
@alexbw is this when you're using float32 arrays? It should be straightforward to add float32 tests. I think adding something to check_fun_and_grads/check_grads should cover almost everything.
Contributor
Yes, when using float32s. Sorry I don't have anything more specific than that; I no longer depend on autograd grads, so I'm not spending more bandwidth on testing them right now. But, if you're interested, there's a possible 2x speed gain by avoiding the casts. Sorry to be unhelpful, but I figured having the info out there would be more useful than silence.
Collaborator
Yeah, it's even better than 2x for some things. I made this PR a little while ago, but I'll make another one to add testing for float32 dtype support across all the primitives.
Force-pushed 86820fd to 2f6cc22
Grads of, e.g., sum(anp.max(x, 1)) fail when x is of dtype float32. The issue is that NumPy implicitly casts to float64 when dividing a float32 array by an int64 array, which happens in grad_chooser. This PR fixes the issue by casting appropriately in grad_chooser.
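The upcast described above can be reproduced in plain NumPy. This is a minimal sketch of the promotion behavior and the kind of cast the fix applies, not autograd's actual grad_chooser code; the variable names here are illustrative.

```python
import numpy as np

# Reproduce the implicit upcast: dividing a float32 array by an
# int64 array promotes the result to float64 under NumPy's rules.
g = np.ones(3, dtype=np.float32)            # stand-in for an incoming gradient
argmax_counts = np.array([1, 2, 1], dtype=np.int64)  # stand-in for tie counts

upcast = g / argmax_counts
print(upcast.dtype)  # float64

# A cast in the spirit of this PR keeps the gradient in the
# input's dtype instead of silently widening to float64.
fixed = (g / argmax_counts).astype(g.dtype)
print(fixed.dtype)  # float32
```

Besides correctness of the returned dtype, avoiding the float64 intermediate is where the speed gain mentioned in the discussion comes from on float32 workloads.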