General methods
Base methods dispatched
Base.setproperty!
Base.setproperty!(t::Tensor, prop::Symbol, val)
If the data property is modified, the gradient is reset to 0
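The behaviour above can be sketched as follows. This is a hypothetical minimal Tensor, not the actual NNJulia type: overriding setproperty! lets a write to data also reset the gradient.

```julia
# Minimal sketch of the documented behaviour (hypothetical standalone Tensor,
# not the actual NNJulia struct).
mutable struct Tensor
    data
    gradient
end

function Base.setproperty!(t::Tensor, prop::Symbol, val)
    setfield!(t, prop, val)
    # Writing new data invalidates any previously computed gradient.
    if prop == :data
        setfield!(t, :gradient, zero(val))
    end
end

t = Tensor(2.0, 5.0)
t.data = 3.0
t.gradient   # 0.0
```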
Base.size
Base.size(t::Tensor)
Return the size of the tensor's data
Base.ndims
Base.ndims(t::Tensor{Union{Float64,Int64,AbstractArray}})
Return the number of dimensions of the tensor's data
Base.length
Base.length(t::Tensor{Union{Float64,Int64,AbstractArray}})
Return the length of the tensor's data
Base.length(d::DataLoader)
The length of a DataLoader is its number of batches.
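A sketch of these methods with hypothetical minimal types (not the actual NNJulia structs): the Tensor methods delegate to the wrapped data, and a DataLoader's length is its batch count, here assuming a partial final batch is kept.

```julia
# Hypothetical minimal Tensor: size, ndims and length forward to the data.
struct Tensor
    data
end
Base.size(t::Tensor)   = size(t.data)
Base.ndims(t::Tensor)  = ndims(t.data)
Base.length(t::Tensor) = length(t.data)

# Hypothetical minimal DataLoader: length is the batch count.
struct DataLoader
    nSamples::Int
    batchSize::Int
end
Base.length(d::DataLoader) = cld(d.nSamples, d.batchSize)  # ceiling division

t = Tensor([1.0 2.0; 3.0 4.0])
size(t), ndims(t), length(t)   # ((2, 2), 2, 4)
length(DataLoader(10, 4))      # 3 batches: 4 + 4 + 2
```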
Base.iterate
Base.iterate(t::Tensor{Union{Float64,Int64,AbstractArray}})
Base.iterate(t::Tensor{Union{Float64,Int64,AbstractArray}}, state)
Iterate on the tensor's data
Base.iterate(d::DataLoader, state=1)
Iterate over the DataLoader. On the first iteration, the list of indices is shuffled (if required). Each iteration then returns a subset of the dataset of shape (dataSize..., batchSize).
For example, if each input sample contains two values and the batch size is 4, the following matrix is returned:
[
1 0 1 1
0 1 0 1
]
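The batching above can be sketched as slicing along the last (sample) dimension. This is a hypothetical helper, not the NNJulia implementation; shuffling is omitted for clarity.

```julia
# Data whose last dimension indexes samples: dataSize = (2,), 6 samples.
data = [1 0 1 1 0 1;
        0 1 0 1 1 0]
batchSize = 4

# Return the batch starting at column `state`, clamped at the last sample,
# with shape (dataSize..., batch).
function batch(data, state, batchSize)
    hi = min(state + batchSize - 1, size(data, 2))
    data[:, state:hi]
end

batch(data, 1, batchSize)   # 2×4 matrix: the first four samples
```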
Base.show
Base.show(io::IO, t::Tensor)
String representation of a tensor
Base.show(io::IO, d::Dense)
String representation of a Dense layer
Base.show(io::IO, s::Sequential)
String representation of a Sequential
Base.show(io::IO, d::Flatten)
String representation of a Flatten layer
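As an illustration of these show overloads, here is a hypothetical minimal sketch (the actual NNJulia output format is not specified here): defining a two-argument show method controls how the type prints and how repr renders it.

```julia
# Hypothetical standalone Tensor with a compact custom string representation.
struct Tensor
    data
    gradient
end
Base.show(io::IO, t::Tensor) = print(io, "Tensor(", t.data, ", grad=", t.gradient, ")")

repr(Tensor([1.0, 2.0], [0.0, 0.0]))   # "Tensor([1.0, 2.0], grad=[0.0, 0.0])"
```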
Methods for the gradient
NNJulia.Autodiff.zero_grad!
zero_grad!(t::Tensor)
Set the gradient with respect to this tensor to 0
zero_grad!(d::Dense)
Set the gradient with respect to the weights and biases of this layer to 0
zero_grad!(s::Sequential)
Set the gradient of every tensor contained in the Sequential's layers to 0
zero_grad!(d::Flatten)
Does nothing, since a Flatten layer has no parameters
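The dispatch pattern above can be sketched with hypothetical minimal types (not the actual NNJulia structs): zero_grad! recurses from a Sequential into each layer, Dense resets its weight and bias gradients, and Flatten is a no-op.

```julia
mutable struct Tensor
    data
    gradient
end
zero_grad!(t::Tensor) = (t.gradient = zero(t.gradient); t)

struct Dense
    weight::Tensor
    bias::Tensor
end
zero_grad!(d::Dense) = (zero_grad!(d.weight); zero_grad!(d.bias); d)

struct Flatten end
zero_grad!(::Flatten) = nothing   # no parameters to reset

struct Sequential
    layers::Vector{Any}
end
zero_grad!(s::Sequential) = foreach(zero_grad!, s.layers)

s = Sequential([Dense(Tensor(1.0, 0.5), Tensor(2.0, 0.1)), Flatten()])
zero_grad!(s)
# every parameter gradient in s is now 0.0
```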