1. 07 June 2018, 4 commits
  2. 06 June 2018, 5 commits
  3. 05 June 2018, 3 commits
    • FIX: Tensorboard callback only supports logging Embeddings layer weights (#7766) · ce56322a
      David Schwertfeger committed
      * Embed layer-outputs rather than layer-weights in TensorBoard callback
      
      * Update docstring and allow multiple inputs
      
      * Fix tests
      
      * Renaming
      
      * Set learning phase
      
      * Compute embeddings in batches
      
      * Pass embedding data explicitly
      
      * Actually process embeddings in batches
      
      * Allow multiple inputs and validate input data
      
      * Add example
      
      * Delete utils.py
      
      * Revert incorrectly resolved merge-conflict
      
      * Minor renaming
      
      * Add comment clarifying the design choice
      ce56322a
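
      A minimal usage sketch of the updated callback, assuming the Keras 2.2-era TensorBoard signature with embeddings_freq, embeddings_layer_names and the new embeddings_data argument; the toy model, layer name, data shapes and log_dir are illustrative only, not part of the commit.

        import numpy as np
        from keras.models import Sequential
        from keras.layers import Dense
        from keras.callbacks import TensorBoard

        # Toy data and model just to exercise the callback.
        x_train = np.random.random((256, 20))
        y_train = np.random.randint(2, size=(256, 1))

        model = Sequential([
            Dense(16, activation='relu', input_shape=(20,), name='hidden'),
            Dense(1, activation='sigmoid'),
        ])
        model.compile(optimizer='rmsprop', loss='binary_crossentropy')

        # After this change the callback embeds layer *outputs*, computed in
        # batches from the data passed explicitly via embeddings_data.
        tb = TensorBoard(log_dir='./logs',
                         embeddings_freq=1,
                         embeddings_layer_names=['hidden'],
                         embeddings_data=x_train[:100])

        model.fit(x_train, y_train, epochs=2, batch_size=32, callbacks=[tb])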
    • Add an advanced activation layer for ReLU (#10322) · b2176482
      Tommi Koivisto committed
      The max_value argument of the relu activation cannot be used in a layer,
      except via a custom layer or a Lambda. Hence, similarly to LeakyReLU or,
      for example, Softmax, this PR adds a ReLU layer, which also enables a
      capped ReLU to be used (see the usage sketch after this entry).
      b2176482
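
      A short usage sketch, assuming the keras.layers.ReLU import path introduced by this PR; the cap value of 6.0 and the surrounding model are illustrative choices, not part of the commit.

        from keras.models import Sequential
        from keras.layers import Dense, ReLU

        # A capped (ReLU6-style) activation as a standalone layer, so max_value
        # no longer requires a custom layer or a Lambda wrapper.
        model = Sequential([
            Dense(64, input_shape=(10,)),
            ReLU(max_value=6.0),  # activations are clipped at 6.0
            Dense(1),
        ])
        model.compile(optimizer='sgd', loss='mse')
        model.summary()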
    • Reduce tests for applications (#10346) · 1365ed5d
      Taehoon Lee committed
      * Reduce tests for applications
      
      * Make selection over all models random
      1365ed5d
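
      A rough sketch of the idea of randomly sampling which application models a test run exercises; the model pool, sample size and test name below are hypothetical and do not reproduce the actual test file.

        import random

        import pytest
        from keras import applications

        # Hypothetical pool of application factories; the real tests cover more.
        MODEL_LIST = [
            applications.ResNet50,
            applications.InceptionV3,
            applications.MobileNet,
            applications.DenseNet121,
        ]

        # Build only a random subset per run to keep CI time down.
        MODELS_TO_TEST = random.sample(MODEL_LIST, 2)


        @pytest.mark.parametrize('app', MODELS_TO_TEST)
        def test_application_builds(app):
            model = app(weights=None)
            # ImageNet application heads default to 1000 classes.
            assert model.output_shape[-1] == 1000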
  4. 03 June 2018, 2 commits
  5. 02 June 2018, 2 commits
  6. 01 June 2018, 1 commit
  7. 30 May 2018, 3 commits
  8. 29 May 2018, 1 commit
  9. 28 May 2018, 1 commit
  10. 26 May 2018, 3 commits
    • Handle capitalised extensions in list_pictures (#10220) · 794f8143
      Botty Dimanov committed
      #10219
      794f8143
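
      A minimal sketch of case-insensitive extension matching for list_pictures; the regex and default extension list are illustrative and may not match the exact patch.

        import os
        import re


        def list_pictures(directory, ext='jpg|jpeg|bmp|png|ppm'):
            # Compile the extension pattern case-insensitively so files such as
            # IMG_0001.JPG or photo.PNG are listed alongside lowercase ones.
            pattern = re.compile(r'([\w]+\.(?:' + ext + r'))$', re.IGNORECASE)
            return [os.path.join(root, f)
                    for root, _, files in os.walk(directory)
                    for f in files if pattern.match(f)]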
    • Non-training Batch Norm operator has bad performance because it runs into TensorFlow's non-fused batch norm API (#10207) · 84aa7b5f
      Wang, Zhiming committed
      
      * When using TensorFlow as the backend, route batch norm into fused batch norm as much as possible, which has better performance.
      
      Fixes issue: http://github.com/keras-team/keras/issues/10058
      
      * In the TensorFlow backend, only call FusedBatchNorm for the NHWC format and when gamma and beta are not None (a dispatch sketch follows this entry).
      
      Test result:
      Test env: TensorFlow (commit a543d9471047ca3f6881c87105fcbe2cdff9207d, Thu May 10 17:43:30 2018, local build), Python 3.4, CentOS 7.4
      Test cases:
        "pytest ./tests/keras/layers/normalization_test.py"  <all passed>
        "pytest ./tests"  <same results as without this commit's modification to BN>
      
      * Fix code style.
      
      * 1. Add an axis parameter to the backend's batch_normalization functions.
      2. Refine the batch_normalization function in the TensorFlow backend so that it calls fused batch norm as much as possible.
      
      Thanks for the comments from fchollet.
      
      * Trigger
      
      * 1. Add a default value of -1 for the axis parameter of the backend's batch_normalization function.
      2. Fix some code style.
      Thanks for the comments from fchollet.
      84aa7b5f
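
      A rough sketch, in TF1-era TensorFlow terms, of the inference-time dispatch described above; this is not the actual Keras backend code, and the layout and argument checks are simplified assumptions.

        import tensorflow as tf


        def batch_normalization(x, mean, var, beta, gamma, axis=-1, epsilon=1e-3):
            # Illustrative dispatch: prefer the fused kernel when its constraints hold.
            ndim = len(x.get_shape())
            # Fused path: 4D tensor, channels-last (NHWC) axis, gamma/beta present.
            if ndim == 4 and axis in (-1, 3) and gamma is not None and beta is not None:
                y, _, _ = tf.nn.fused_batch_norm(
                    x, gamma, beta, mean=mean, variance=var,
                    epsilon=epsilon, data_format='NHWC', is_training=False)
                return y
            # Otherwise fall back to the composed, non-fused op.
            return tf.nn.batch_normalization(x, mean, var, beta, gamma, epsilon)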
    • Adds to and alphabetizes documentation of Layer base class. (#10282) · e6d21795
      Stanley Bileschi committed
      * Alphabetizes and adds to layers doc.
      
      * Responding to @cais comments
      
      * Fix spacing. Remove in(out)bound_nodes
      e6d21795
  11. 25 May 2018, 3 commits
  12. 24 May 2018, 3 commits
  13. 23 May 2018, 2 commits
  14. 22 May 2018, 2 commits
  15. 19 May 2018, 1 commit
    • In-place split to avoid inter-device duplication (#10230) · bf1378f3
      ghostplant committed
      New benchmark with in-place split:
      
      >> keras.applications.ResNet50 224x224x3 (NCHW; 4x NVIDIA Tesla P100)
       input_shape = 3x224x224, batch_size =  96 x 4: 392 images/sec => 417 images/sec
       input_shape = 3x299x299, batch_size =  64 x 4: 229 images/sec => 244 images/sec
       input_shape = 3x224x224, batch_size =   8 x 4: 148 images/sec => 163 images/sec
      
      >> keras.applications.InceptionV3 (NCHW; 4x NVIDIA Tesla P100)
       input_shape = 3x224x224, batch_size = 128 x 4: 488 images/sec => 526 images/sec
       input_shape = 3x299x299, batch_size =  96 x 4: 270 images/sec => 294 images/sec
       input_shape = 3x224x224, batch_size =   8 x 4: 146 images/sec => 158 images/sec
      Signed-off-by: CUI Wei <ghostplant@qq.com>
      bf1378f3
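
      A rough illustration, under TF1-era assumptions, of the in-place split idea: split the batch once and hand each GPU only its own shard, instead of replicating the full batch to every device and slicing it there. The function name and device placement below are hypothetical, not the actual multi_gpu_model patch.

        import tensorflow as tf


        def split_batch_across_gpus(inputs, gpus):
            # One split op, placed once, produces per-device shards up front.
            with tf.device('/cpu:0'):
                shards = tf.split(inputs, num_or_size_splits=gpus, axis=0)

            outputs = []
            for gpu_id, shard in enumerate(shards):
                with tf.device('/gpu:%d' % gpu_id):
                    # Each device receives only its shard, so the full batch is
                    # not duplicated across devices before slicing.
                    outputs.append(tf.identity(shard))
            return outputs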
  16. 18 May 2018, 4 commits