Ⅰ How do I get the x-axis values of a normalized histogram in Python?
a = plt.hist()
a[0] is the height of each bin, and a[1] is the list of bin edges (it has one more entry than a[0]).
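A minimal sketch with made-up random data (in current matplotlib, density=True produces the normalized histogram):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)  # hypothetical sample data
heights, bin_edges, patches = plt.hist(data, bins=20, density=True)
bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2  # x-coordinate of each bar
plt.show()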
Ⅱ Is there a Python function for data normalization?
My guess is it's line 17 of autonorm.py, normdataset = zeros(shape(dataset)): shape(dataset) returns a tuple, but zeros(args) expects integer arguments, so a type conversion should fix it.
Ⅲ Data normalization
The purpose of normalization is to unify the statistical distribution of the samples: normalizing into [0, 1] gives a statistical probability distribution, while normalizing onto some other interval gives a statistical coordinate distribution. The word itself carries the sense of making data uniform and comparable.
1. (0,1) normalization:
This is the simplest and most obvious method: traverse every value in the feature vector, record the Max and Min, and use Max - Min as the base (i.e. Min maps to 0 and Max maps to 1) to normalize the data:
LaTeX: x_{normalization} = \frac{x - Min}{Max - Min}
Python implementation:
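The code block that originally followed is missing here; a minimal sketch of the formula above (assuming numpy; the function name is illustrative) might be:

import numpy as np

def min_max_normalization(x):
    """Map every element of x linearly into [0, 1] (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())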
Ⅳ How do I do mean-variance normalization in Python?
You can use linear normalization: find the maximum and the minimum values.
The mean (average) measures the central tendency of a set of data: it is the sum of all the values divided by the number of values. The key to solving average-value problems is identifying the "total quantity" and the corresponding total count. In statistical work, the mean and the standard deviation are the two most important measures describing the central tendency and the dispersion of the data.
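For the mean-variance (z-score) normalization asked about, a minimal sketch (assuming numpy; the function name is mine) could be:

import numpy as np

def z_score_normalization(x):
    """Subtract the mean and divide by the standard deviation (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()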
Ⅴ In Python, can I normalize data without using the sklearn package?
Just use the definition:
take data.max and data.min,
then map every element x to (x - data.min) / (data.max - data.min). That's all there is to it.
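As a sketch with made-up data, following that definition exactly:

import numpy as np

data = np.array([3.0, 7.0, 11.0, 19.0])  # hypothetical data
normalized = (data - data.min()) / (data.max() - data.min())
# normalized is now array([0.  , 0.25, 0.5 , 1.  ])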
Ⅵ How do I interpret the output of logistic regression in Python?
Below is the Python code. Since there is little training data, batch gradient descent is used here rather than incremental (stochastic) gradient descent.
## author: lijiayan  ## date: 2016/10/27  ## name: logReg.py
from numpy import *
import matplotlib.pyplot as plt

def loadData(filename):
    data = loadtxt(filename)
    m, n = data.shape
    print('the number of examples:', m)
    print('the number of features:', n - 1)
    x = data[:, 0:n-1]
    y = data[:, n-1:n]
    return x, y

# the sigmoid function
def sigmoid(z):
    return 1.0 / (1 + exp(-z))

# the cost function (the log-likelihood, which is maximized)
def costfunction(y, h):
    y = array(y)
    h = array(h)
    J = sum(y * log(h)) + sum((1 - y) * log(1 - h))
    return J

# the batch gradient descent algorithm
def gradescent(x, y):
    m, n = shape(x)          # m: number of training examples; n: number of features
    x = mat(c_[ones(m), x])  # add the intercept term x0 and convert to matrix
    y = mat(y)
    a = 0.0000025            # learning rate
    maxcycle = 4000
    theta = zeros((n + 1, 1))  # initial theta
    J = []
    for i in range(maxcycle):
        h = sigmoid(x * theta)
        theta = theta + a * x.T * (y - h)
        cost = costfunction(y, h)
        J.append(cost)
    plt.plot(J)
    plt.show()
    return theta, cost

# the stochastic gradient descent (m should be large if you want a good result)
def stocGraddescent(x, y):
    m, n = shape(x)          # m: number of training examples; n: number of features
    x = mat(c_[ones(m), x])  # add the intercept term x0 and convert to matrix
    y = mat(y)
    a = 0.01                 # learning rate
    theta = ones((n + 1, 1)) # initial theta
    J = []
    for i in range(m):
        h = sigmoid(x[i] * theta)
        theta = theta + a * x[i].T * (y[i] - h)
        cost = costfunction(y, h)
        J.append(cost)
    plt.plot(J)
    plt.show()
    return theta, cost

# plot the decision boundary (two-feature case)
def plotbestfit(x, y, theta):
    plt.plot(x[:, 0:1][where(y == 1)], x[:, 1:2][where(y == 1)], 'ro')
    plt.plot(x[:, 0:1][where(y != 1)], x[:, 1:2][where(y != 1)], 'bx')
    x1 = arange(-4, 4, 0.1)
    x2 = (-float(theta[0]) - float(theta[1]) * x1) / float(theta[2])
    plt.plot(x1, x2)
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.show()

def classifyVector(inX, theta):
    prob = sigmoid((inX * theta).sum(1))
    return where(prob >= 0.5, 1, 0)

def accuracy(x, y, theta):
    m = shape(y)[0]
    x = c_[ones(m), x]
    y_p = classifyVector(x, theta)
    return sum(y_p == y) / float(m)
Calling the code above:
from logReg import *

x, y = loadData("horseColicTraining.txt")
theta, cost = gradescent(x, y)
print('J:', cost)
ac_train = accuracy(x, y, theta)
print('accuracy of the training examples:', ac_train)
x_test, y_test = loadData('horseColicTest.txt')
ac_test = accuracy(x_test, y_test, theta)
print('accuracy of the test examples:', ac_test)
Results with learning rate = 0.0000025 and 4000 iterations:
Trend of the likelihood function (J = sum(y*log(h)) + sum((1-y)*log(1-h))): the likelihood is being maximized, so in general the result is only good once the curve has stabilized.
From this example, we can see the importance of normalizing the features.
Ⅶ How do I normalize a set of data into (0,1) in MATLAB?
Very simple: use the function mapminmax. The documentation is long, so I won't translate all of it; just a few key points.
1. The default mapping range is [-1, 1], so if you need [0, 1], supply the parameters in this form:
MappedData = mapminmax(OriginalData, 0, 1);
2. It only normalizes row by row: given a matrix, each row is normalized independently. If you need to normalize the whole matrix at once, do this:
FlattenedData = OriginalData(:)'; % flatten the matrix into one column, then transpose it into one row
MappedFlattened = mapminmax(FlattenedData, 0, 1); % normalize
MappedData = reshape(MappedFlattened, size(OriginalData)); % reshape back into the original matrix; no need to transpose back, because reshape refills column by column anyway
The full documentation follows:
mapminmax
Process matrices by mapping row minimum and maximum values to [-1 1]
Syntax
[Y,PS] = mapminmax(X,YMIN,YMAX)
[Y,PS] = mapminmax(X,FP)
Y = mapminmax('apply',X,PS)
X = mapminmax('reverse',Y,PS)
dx_dy = mapminmax('dx',X,Y,PS)
dx_dy = mapminmax('dx',X,[],PS)
name = mapminmax('name');
fp = mapminmax('pdefaults');
names = mapminmax('pnames');
mapminmax('pcheck',FP);
Description
mapminmax processes matrices by normalizing the minimum and maximum values of each row to [YMIN, YMAX].
mapminmax(X,YMIN,YMAX) takes X and optional parameters
X
N x Q matrix or a 1 x TS row cell array of N x Q matrices
YMIN
Minimum value for each row of Y (default is -1)
YMAX
Maximum value for each row of Y (default is +1)
and returns
Y
Each M x Q matrix (where M == N) (optional)
PS
Process settings that allow consistent processing of values
mapminmax(X,FP) takes parameters as a struct: FP.ymin, FP.ymax.
mapminmax('apply',X,PS) returns Y, given X and settings PS.
mapminmax('reverse',Y,PS) returns X, given Y and settings PS.
mapminmax('dx',X,Y,PS) returns the M x N x Q derivative of Y with respect to X.
mapminmax('dx',X,[],PS) returns the derivative, less efficiently.
mapminmax('name') returns the name of this process method.
mapminmax('pdefaults') returns the default process parameter structure.
mapminmax('pdesc') returns the process parameter descriptions.
mapminmax('pcheck',FP) throws an error if any parameter is illegal.
Examples
Here is how to format a matrix so that the minimum and maximum values of each row are mapped to the default interval [-1,+1].
x1 = [1 2 4; 1 1 1; 3 2 2; 0 0 0]
[y1,PS] = mapminmax(x1)
Next, apply the same processing settings to new values.
x2 = [5 2 3; 1 1 1; 6 7 3; 0 0 0]
y2 = mapminmax('apply',x2,PS)
Reverse the processing of y1 to get x1 again.
x1_again = mapminmax('reverse',y1,PS)
Algorithm
It is assumed that X has only finite real values, and that the elements of each row are not all equal.
y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
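For Python users, a rough numpy equivalent of this row-wise mapping (a sketch, not the MATLAB function itself; map_min_max is a hypothetical name) might be:

import numpy as np

def map_min_max(x, ymin=-1.0, ymax=1.0):
    """Linearly map each row of x into [ymin, ymax], mirroring mapminmax's formula (sketch)."""
    x = np.asarray(x, dtype=float)
    xmin = x.min(axis=1, keepdims=True)  # per-row minimum
    xmax = x.max(axis=1, keepdims=True)  # per-row maximum
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin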
Ⅷ Normalizing multi-dimensional CSV data with different units in Python
1) Linear normalization
This normalization suits cases where the values are fairly concentrated. Its drawback is that when max and min are unstable, the normalized results are unstable too, which destabilizes whatever uses them downstream; in practice, empirical constants can be substituted for max and min.
2) Standard-deviation standardization
The processed data follow a standard normal distribution, i.e. mean 0 and standard deviation 1.
3) Non-linear normalization
Often used when the data are widely spread, with some values very large and some very small. The original values are mapped through a mathematical function such as log, an exponential, or arctangent; which curve to use depends on how the data are distributed.
Log function: x' = lg(x) / lg(max)
Arctangent: x' = atan(x) * 2 / π
Python implementation
Linear normalization
Define the array: x = numpy.array(x)
Column-wise maximum of a 2-D array: x.max(axis=0)
Column-wise minimum of a 2-D array: x.min(axis=0)
Linear normalization of a 2-D array (a usage sketch follows the function):
def max_min_normalization(data_value, data_col_max_values, data_col_min_values):
    """ Data normalization using max value and min value

    Args:
        data_value: The data to be normalized
        data_col_max_values: The maximum value of data's columns
        data_col_min_values: The minimum value of data's columns
    """
    data_shape = data_value.shape
    data_rows = data_shape[0]
    data_cols = data_shape[1]
    for i in range(0, data_rows, 1):
        for j in range(0, data_cols, 1):
            data_value[i][j] = \
                (data_value[i][j] - data_col_min_values[j]) / \
                (data_col_max_values[j] - data_col_min_values[j])
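A hypothetical call (the values are made up; note the function modifies its argument in place):

import numpy
x = numpy.array([[1.0, 10.0], [2.0, 20.0], [4.0, 40.0]])  # hypothetical data
max_min_normalization(x, x.max(axis=0), x.min(axis=0))
# each column of x is now scaled into [0, 1]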
Standard-deviation normalization
Define the array: x = numpy.array(x)
Column-wise mean of a 2-D array: x.mean(axis=0)
Column-wise standard deviation of a 2-D array: x.std(axis=0)
Standard-deviation normalization of a 2-D array (a usage sketch follows the function):
def standard_deviation_normalization(data_value, data_col_means,
                                     data_col_standard_deviation):
    """ Data normalization using standard deviation

    Args:
        data_value: The data to be normalized
        data_col_means: The means of data's columns
        data_col_standard_deviation: The standard deviation of data's columns
    """
    data_shape = data_value.shape
    data_rows = data_shape[0]
    data_cols = data_shape[1]
    for i in range(0, data_rows, 1):
        for j in range(0, data_cols, 1):
            data_value[i][j] = \
                (data_value[i][j] - data_col_means[j]) / \
                data_col_standard_deviation[j]
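Called the same way, again with made-up values:

import numpy
x = numpy.array([[1.0, 10.0], [2.0, 20.0], [4.0, 40.0]])  # hypothetical data
standard_deviation_normalization(x, x.mean(axis=0), x.std(axis=0))
# each column of x now has zero mean and unit variance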
Non-linear normalization (using lg as the example)
Define the array: x = numpy.array(x)
Column-wise maximum of a 2-D array: x.max(axis=0)
lg of every element of a 2-D array: numpy.log10(x)
lg of the column-wise maximum: numpy.log10(x.max(axis=0))
lg-based non-linear normalization of a 2-D array (a usage sketch follows the function):
def nonlinearity_normalization_lg(data_value_after_lg,
                                  data_col_max_values_after_lg):
    """ Data normalization using lg

    Args:
        data_value_after_lg: The data to be normalized (already in lg scale)
        data_col_max_values_after_lg: The maximum value of data's columns (in lg scale)
    """
    data_shape = data_value_after_lg.shape
    data_rows = data_shape[0]
    data_cols = data_shape[1]
    for i in range(0, data_rows, 1):
        for j in range(0, data_cols, 1):
            data_value_after_lg[i][j] = \
                data_value_after_lg[i][j] / data_col_max_values_after_lg[j]
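A hypothetical call (the data must be positive, and both arguments are already lg values):

import numpy
x = numpy.array([[10.0, 100.0], [50.0, 1000.0]])  # hypothetical positive data
x_lg = numpy.log10(x)
nonlinearity_normalization_lg(x_lg, numpy.log10(x.max(axis=0)))
# x_lg now holds lg(x) / lg(max), column by column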
Ⅸ Do I need to normalize the data before PCA dimensionality reduction with python sklearn?
No need.
from sklearn.decomposition import PCA
pca = PCA(n_components=1)
newData = pca.fit_transform(data)
You can take a look here for a detailed explanation:
http://doc.okbase.net/u012162613/archive/120946.html
Ⅹ How do I restore normalized data in Python?
I see all the experts normalizing their raw data before processing it, but nobody explains how to map the normalized data back.
The only method I have found so far is this MATLAB function:
xtt = mapminmax('reverse',y1,ps)
In Python, many people recommend sklearn for normalization, but there seems to be no way to reverse it.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Why do I want to restore it?
I fed dates and temperatures into a model, let it run for half a day hoping to see the next day's temperature, and what came out was something like 0.837.
After normalizing with sklearn's transform, you can restore the original values with inverse_transform.
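A minimal sketch with sklearn's MinMaxScaler (the temperature values are made up):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

temps = np.array([[15.0], [20.0], [25.0], [30.0]])  # hypothetical temperatures
scaler = MinMaxScaler()                       # default feature_range is (0, 1)
scaled = scaler.fit_transform(temps)          # normalize
restored = scaler.inverse_transform(scaled)   # back to the original units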