Creating NDArrays
from mxnet import nd

x = nd.arange(12)           # a row vector holding the values 0 to 11
print(x)
print(x.shape)
x = x.reshape((3, 4))       # reshape to 3 rows and 4 columns
print(x)
print(nd.zeros((2, 3, 4)))  # a tensor of zeros
print(nd.ones((2, 3, 4)))   # a tensor of ones
y = nd.array([[1, 2, 3, 4], [2, 3, 5, 6], [2, 5, 7, 8]])  # create from a nested Python list
print(y)
z = nd.random.normal(0, 1, shape=(3, 4))  # random values drawn from N(0, 1)
print(z)
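As a small extra sketch that is not in the original notes: reshape can infer one dimension when it is given as -1, which saves computing that size by hand.

from mxnet import nd

x = nd.arange(12)
# -1 asks reshape to infer the remaining dimension (12 / 4 = 3 rows here)
print(x.reshape((-1, 4)).shape)   # (3, 4)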
Operations
from mxnet import nd

x = nd.arange(12).reshape((3, 4))
y = nd.array([[1, 2, 3, 4], [2, 3, 5, 6], [2, 5, 7, 8]])
print(x + y)
print(x - y)
print(x * y)                   # element-wise multiplication
print(x / y)                   # element-wise division
print(y.exp())                 # element-wise exponential
print(nd.dot(x, y.T))          # matrix multiplication: (3, 4) x (4, 3) -> (3, 3)
print(nd.concat(x, y, dim=0))  # concatenate along rows
print(nd.concat(x, y, dim=1))  # concatenate along columns
print(x == y)                  # element-wise comparison, giving 0/1 values
print(x.sum())                 # sum of all elements
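Note that x.sum() returns a single-element NDArray rather than a Python number. A minimal sketch of converting such a result to a Python scalar with asscalar():

from mxnet import nd

x = nd.arange(12).reshape((3, 4))
s = x.sum()            # a single-element NDArray
print(s.asscalar())    # the Python float 66.0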
Broadcasting
When an element-wise operation is applied to two NDArrays of different shapes, the broadcasting mechanism may be triggered: the elements are first replicated appropriately so that the two NDArrays have the same shape, and the operation is then carried out element-wise.
from mxnet import nd

a = nd.arange(3).reshape((3, 1))
b = nd.arange(2).reshape((1, 2))   # shapes (3, 1) and (1, 2) are broadcast-compatible
print(a, b)
print(a + b)                       # both are broadcast to shape (3, 2) before the addition
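For reference, a equals [[0], [1], [2]] and b equals [[0, 1]], so broadcasting replicates a across columns and b across rows: a + b has shape (3, 2) and equals [[0, 1], [1, 2], [2, 3]].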
Indexing
from mxnet import nd

x = nd.arange(12).reshape((3, 4))
print(x)
print(x[1:3])     # the rows with index 1 and 2
x[1, 2] = 12      # assign to a single element
print(x)
x[1:2, :] = 24    # assign to every element of row 1
print(x)
Memory overhead of operations
Each operation allocates new memory to store its result. Even an operation like y = x + y allocates new memory and then makes y point to it. This can be verified with Python's built-in id function: if two instances have the same id, they refer to the same memory address; otherwise they do not.
from mxnet import nd

x = nd.arange(12).reshape((3, 4))
y = nd.array([[1, 2, 3, 4], [2, 3, 5, 6], [2, 5, 7, 8]])

before = id(y)
y = x + y
print(id(y) == before)    # False: y now refers to newly allocated memory

z = y.zeros_like()
before = id(z)
z[:] = x + y
print(id(z) == before)    # True: the result is written into z's existing memory

nd.elemwise_add(x, y, out=z)
print(id(z) == before)    # True: out=z writes the result into z directly

before = id(x)
x += y
print(id(x) == before)    # True: += updates x in place
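Note that z[:] = x + y still allocates a temporary buffer for x + y before copying the result into z, whereas nd.elemwise_add(x, y, out=z) writes the result into z directly and avoids that temporary allocation.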
Converting between NDArray and NumPy
import numpy as np
from mxnet import nd

p = np.ones((2, 3))
d = nd.array(p)        # NumPy array -> NDArray
print(d)
print(d.asnumpy())     # NDArray -> NumPy array
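A small extra check, not part of the original notes: both conversions copy the data, so modifying the NumPy array afterwards leaves the NDArray unchanged.

import numpy as np
from mxnet import nd

p = np.ones((2, 3))
d = nd.array(p)      # copies p into a new NDArray
p[0, 0] = 100
print(d)             # still all ones: d does not share memory with p
q = d.asnumpy()      # copies d into a new NumPy array
q[0, 0] = 100
print(d)             # still all ones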