1. 22 August 2019, 4 commits
  2. 26 July 2019, 1 commit
  3. 24 July 2019, 1 commit
    • For better performance (#13144) · efe72ef4
      Neutron3529 committed
      Moving self.l1 and self.l2 outside K.sum may improve the performance of computing the norm (see the sketch after this entry).
      In deep learning the norm should not be very large (a large norm would make the loss function useless), so moving self.l1 and self.l2 outside K.sum does not introduce an overflow problem.
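      A minimal sketch of the change described above, assuming the Keras backend
      imported as K; the class name L1L2Sketch and its defaults are illustrative
      stand-ins, not the actual keras.regularizers source:

          from keras import backend as K

          class L1L2Sketch(object):
              """Illustrative L1/L2 regularizer with the scalar factors outside K.sum."""

              def __init__(self, l1=0., l2=0.):
                  self.l1 = K.cast_to_floatx(l1)
                  self.l2 = K.cast_to_floatx(l2)

              def __call__(self, x):
                  regularization = 0.
                  if self.l1:
                      # Before: K.sum(self.l1 * K.abs(x)) scaled every element and
                      # then summed; scaling the summed result once is cheaper.
                      regularization += self.l1 * K.sum(K.abs(x))
                  if self.l2:
                      # Before: K.sum(self.l2 * K.square(x))
                      regularization += self.l2 * K.sum(K.square(x))
                  return regularization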
  4. 11 July 2019, 1 commit
  5. 10 July 2019, 1 commit
  6. 07 July 2019, 1 commit
  7. 04 July 2019, 1 commit
  8. 03 July 2019, 6 commits
  9. 28 June 2019, 1 commit
  10. 25 June 2019, 2 commits
  11. 24 June 2019, 1 commit
  12. 18 June 2019, 1 commit
  13. 08 June 2019, 3 commits
  14. 05 June 2019, 1 commit
  15. 02 June 2019, 1 commit
  16. 30 May 2019, 1 commit
  17. 29 May 2019, 3 commits
    • Fix CNTK test. · eab1b5bc
      François Chollet committed
    • Revert "Sync Keras optimizer with tf.keras optimizer (#12841)" (#12888) · 665b0076
      François Chollet committed
      This reverts commit 08f6bdeb.
    • Sync Keras optimizer with tf.keras optimizer (#12841) · 08f6bdeb
      tanzhenyu committed
      * Sync keras optimizers with tf.keras optimizers.
      
      Changes:
      1) epsilon has been removed from the argument list; the default is used
         instead, for future support of mixed precision. Passing epsilon is
         still supported.
      2) decay has been removed from the argument list, for future support of
         learning rate decay objects. Passing decay is still supported.
      3) All lr arguments have been renamed to learning_rate. Passing lr is
         still supported (a sketch of this argument handling follows this entry).
      4) Add initial_accumulator_value to Adagrad.
      5) Add momentum and centered to RMSprop.
      6) The Adam optimizer no longer creates vhat when amsgrad is not set.
         set_weights is overridden to maintain backward compatibility.
      7) Adadelta adds iterations to its weight list. set_weights is overridden
         to maintain backward compatibility.
      8) Adagrad adds iterations to its weight list. set_weights is overridden
         to maintain backward compatibility.
      9) RMSprop adds iterations to its weight list. set_weights is overridden
         to maintain backward compatibility.
      10) Nadam changes the default learning rate from 0.002 to 0.001.
      11) Adamax changes the default learning rate from 0.002 to 0.001.
      12) Adadelta changes the default learning rate from 1.0 to 0.001.
      13) Adagrad changes the default learning rate from 0.01 to 0.001.
      
      * Slightly refactor SGD.
      
      * Remove blank spaces.
      
      * Fix some pep8 errors.
      
      * Fix some errors.
      
      * Restore learning rate to previous default.
      
      For Adadelta, Adamax, Adagrad
      
      * Adjust some more lr values.
      
      * Nit
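      A minimal sketch of the backward-compatible argument handling described in
      items 1) through 3) above; the class name SGDLike and its bookkeeping are
      illustrative assumptions, not the actual keras.optimizers code:

          class SGDLike(object):
              """Illustrative optimizer constructor that keeps legacy keyword arguments."""

              def __init__(self, learning_rate=0.01, momentum=0., **kwargs):
                  # Legacy alias: `lr` is still accepted and mapped onto `learning_rate`.
                  if 'lr' in kwargs:
                      learning_rate = kwargs.pop('lr')
                  # `decay` and `epsilon` are no longer formal arguments,
                  # but passing them still works and otherwise falls back to defaults.
                  self.initial_decay = kwargs.pop('decay', 0.)
                  self.epsilon = kwargs.pop('epsilon', 1e-7)
                  if kwargs:
                      raise TypeError('Unexpected keyword argument(s): %s' % str(kwargs))
                  self.learning_rate = learning_rate
                  self.momentum = momentum

          # Both spellings construct equivalent optimizers.
          opt_new = SGDLike(learning_rate=0.001)
          opt_old = SGDLike(lr=0.001, decay=1e-6)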
  18. 25 May 2019, 1 commit
  19. 24 May 2019, 4 commits
  20. 22 May 2019, 1 commit
  21. 21 May 2019, 1 commit
  22. 06 May 2019, 1 commit
  23. 03 May 2019, 1 commit
  24. 02 May 2019, 1 commit