Replace Comma With Dot Pandas
Given the following array, I want to replace commas with dots: array(['0,140711', '0,140711', '0,0999', '0,0999', '0,001', '0,001', '0,140711', '0,140711', '0,140711', '0,14
Solution 1:
You need to assign the result of your operation back, because the operation is not in place. You can also use apply, or stack and unstack, with the vectorised str.replace to do this more quickly:
In[5]:
df.apply(lambda x: x.str.replace(',','.'))

Out[5]:
         1-8        1-7
0   0.140711   0.140711
1     0.0999     0.0999
2      0.001      0.001
3   0.140711   0.140711
4   0.140711   0.140711
5   0.140711   0.140711
6          0          0
7          0          0
8   0.140711   0.140711
9   0.140711   0.140711
10  0.140711  0.1125688
11  0.140711  0.1125688
12  0.140711  0.1125688
13  0.140711  0.1125688
14  0.140711   0.140711
15  0.140711   0.140711
16  0.140711   0.140711
17  0.140711   0.140711
18  0.140711   0.140711
19  0.140711   0.140711
20  0.140711   0.140711
21  0.140711   0.140711
22  0.140711   0.140711
23  0.140711   0.140711

In[4]:
df.stack().str.replace(',','.').unstack()
Out[4]:
(the output is identical to the one above)

The key thing here is to assign the result back:
df = df.stack().str.replace(',','.').unstack()
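
For completeness, a minimal, self-contained sketch of this solution; the three-row DataFrame, its values, and the final astype(float) conversion are illustrative additions rather than part of the original answer:

import pandas as pd

# Small example DataFrame of comma-decimal strings (illustrative values only).
df = pd.DataFrame({'1-8': ['0,140711', '0,0999', '0,001'],
                   '1-7': ['0,140711', '0,0999', '0,001']})

# str.replace is not in place, so the result must be assigned back.
df = df.stack().str.replace(',', '.').unstack()

# A likely follow-up step: convert the cleaned strings to floats.
df = df.astype(float)
print(df.dtypes)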
Solution 2:
If you are reading in with read_csv, you can specify how it interprets decimals with the decimal parameter.
e.g.
your_df = pd.read_csv('/your_path/your_file.csv',sep=';',decimal=',')
From the pandas read_csv documentation:
thousands : str, optional
    Thousands separator.
decimal : str, default ‘.’
    Character to recognize as decimal point (e.g. use ‘,’ for European data).
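
A self-contained sketch of Solution 2, assuming an in-memory CSV via io.StringIO in place of a real file path; the column names a and b are invented for illustration:

import io
import pandas as pd

# Semicolon-separated data that uses a comma as the decimal mark (common in European locales).
csv_data = io.StringIO('a;b\n0,140711;0,0999\n0,001;0,140711')

# decimal=',' makes read_csv parse '0,140711' directly as the float 0.140711.
your_df = pd.read_csv(csv_data, sep=';', decimal=',')
print(your_df.dtypes)  # both columns come back as float64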
Solution 3:
If you need to replace commas with dots in a particular column, use
data["column_name"] = data["column_name"].str.replace(',', '.')
to avoid the 'str' object has no attribute 'str' error, which occurs when .str is called on an individual Python string instead of on a pandas Series.
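
A short sketch of Solution 3, assuming a hypothetical single-column DataFrame in which "column_name" holds comma-decimal strings:

import pandas as pd

# Hypothetical example; 'column_name' is a placeholder name.
data = pd.DataFrame({'column_name': ['0,140711', '0,0999', '0,001']})

# .str is only available on a Series, so select the column first and assign the result back.
data['column_name'] = data['column_name'].str.replace(',', '.')
print(data)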