make: f2py: No such file or directory
YoungEnzo opened this issue · 23 comments
I ran into a problem while installing Pyroms; please help me! Thank you! I have already installed all the necessary libraries, but I hit an error when building Scrip after downloading Pyroms.
The necessary libraries:
from mpl_toolkits.basemap import Basemap
import lpsolve55
mpl_toolkits
I got an error when running make with sudo:
(base) zeng@zeng-virtual-machine:~/zeng/pyroms/pyroms/external/scrip/source$ sudo make
f2py --fcompiler=gnu95 --f90exec=gfortran --f90flags='-g -fdefault-real-8 -ffixed-form -O2 -fPIC'
-L. -L/usr/lib -lnetcdf -lnetcdff -ljpeg -lmfhdf -ldf -lhdf5 -lhdf5_fortran -lhdf5_hl -lhdf5hl_fortran -I. -I/usr/include -m scrip -c kinds_mod.o constants.o iounits.o netcdf.o grids.o remap_vars.o remap_distwgt.o remap_conserv.o remap_bilinear.o remap_bicubic.o remap_read.o remap.o timers.o remap_write.o ./pyscrip.f90
make: f2py: No such file or directory
make: *** [makefile:120: scrip_ext] Error 127
To my knowledge, f2py is included in numpy, and I have also set a variable:
export f2py=/home/zeng/anaconda3/envs/py2/lib/python2.7/site-packages/numpy/f2py/
but it didn't work.
I am not very familiar with Linux, so this error is driving me crazy. Is anyone willing to help me? Thank you very much!!
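For what it's worth, make resolves the command name f2py through PATH, the same lookup shutil.which performs; a shell variable named f2py (as in the export above) is never consulted. A minimal sketch of that lookup, using an invented stub executable in a temporary directory (the real fix is prepending the directory that contains numpy's f2py executable to PATH):

```python
import os
import shutil
import tempfile

# make finds "f2py" by scanning PATH; demonstrate with a stand-in executable.
# The stub and its directory are invented purely for this demo.
demo_bin = tempfile.mkdtemp()
stub = os.path.join(demo_bin, "f2py")
with open(stub, "w") as fh:
    fh.write("#!/bin/sh\necho f2py-stub\n")
os.chmod(stub, 0o755)  # must be executable to be found

os.environ["PATH"] = demo_bin + os.pathsep + os.environ["PATH"]
print(shutil.which("f2py"))  # resolves to the stub; in practice, prepend numpy's bin dir
```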
As I've said before, I have totally given up on scrip. Even if it compiles, it doesn't load.
Instead, use xesmf and the Arctic_HYCOM_GLBy example.
Wow, Kate, thanks for your quick reply and your contributions to ROMS and Pyroms, which have really helped me a lot in learning ROMS. You have absolutely lit up the world! haha..
I used the HYCOM_GLBY example to create my input files before, but recently my hard drive broke, so I had to reinstall Pyroms. I didn't encounter this error during my previous installation, so I'm worried it's a version issue. I used to think that you had to install Scrip before you could install Pyroms, but after I import pyroms, the terminal does not report an error. So it seems installing Pyroms is not related to installing Scrip? If I can successfully import pyroms, does that mean I have installed Pyroms successfully and can use it normally?
Did you download a fresh pyroms? I told it to stop loading scrip when loading pyroms. Yes, you can use pyroms, but not the scrip remapping part of it.
Yes, I downloaded the new Pyroms with a git clone... And I need remapping, because I remember that when I created the input files (bc, ic, and clm), I used the remap*.py scripts. How can I download Scrip? Or can I copy the whole pyroms directory from the broken hard drive to my new one? I am afraid the copied directory won't work on the new drive. Dear Kate, could you please give me some suggestions? Thank you!
You need a fresh conda environment with xesmf in it. At least sometimes, it has worked best to ask for that package first. It will allow you to do remapping, along the lines of the HYCOM_GLBy example. That's what I use now. If you can get xesmf to work, you don't need scrip ever again.
Thank you, Kate, but I want to know: do I need to modify the remap*.py scripts? Because one necessary library has changed from SCRIP to xESMF. Or can I just run my previous remap*.py scripts and they will create the input files directly?
Well, no, you do have to change your scripts.
Dear Kate, is there any documentation that can guide me in modifying the code? I am also new to Python... Sorry to bother you with these simple questions.
What can I say? I asked the original pyroms author for more documentation and he said no. He has since moved on, no longer works with ROMS.
I no longer work with ROMS.
I am also an idiot in Python. I'm currently hacking some Python code in a jupyter notebook, adding print statements. Do you know how to run a jupyter notebook? Or do a web search on the Python debugger? Try things and watch it fail, learning every time.
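The print-statement workflow described above can be sketched in a few lines; the function name and values below are invented for illustration, and the pdb call is left commented so the sketch runs through:

```python
import pdb

def remap(value):
    # Uncommenting the next line would pause execution here, letting you
    # inspect variables interactively before continuing (c) or stepping (n):
    # pdb.set_trace()
    doubled = value * 2          # a print statement is the low-tech alternative
    print("doubled =", doubled)
    return doubled

remap(21)  # prints "doubled = 42"
```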
Thank you, Kate. I hadn't used Jupyter before, but after searching for it on the Internet, I see it's a kind of Python IDE that can help with debugging and coding. I will try to use it, thank you again. But before that, I'd like to downgrade NumPy. My new OS is Ubuntu 22.04 and all the libraries I use are the latest, which may break Pyroms... There is no reason I could install Scrip successfully before but can't now... So I guess recreating the same environment is also a kind of solution. If all of that is useless, I can only use xESMF and modify the code, which is a bit difficult for me.
Dear Kate, I am trying to use xESMF and the Arctic_HYCOM_GLBy example now. I used the Palau_HYCOM example before, and I got the merged .nc file HYCOM_GLBy0.08_2022_188.nc by running the scripts in Palau_HYCOM. Can I use that .nc file directly and then run the scripts in Arctic_HYCOM_GLBy for remapping and creating the bdry/ic/clm files?
I also found that you said we should not run make_remap_weights_file.py if we use the Arctic_HYCOM_GLBy example, right?
Yes, you download HYCOM on the HYCOM grid, then remap to your grid using the Arctic_HYCOM_GLBy example. This example should not have the make_remap_weights.py - if it does, I forgot to delete it.
Dear Kate, I'm sorry to bother you again; I've encountered a problem. I want ROMS to run a simulation from 2022-07-01 to 2022-07-07, but at the first step the simulation time shows as 2022-07-07, and after simulating July 7th the model terminated at 2022-07-08 00:00:00. When I checked the input files, I found the initial .nc file only has data for July 7th.
So I tried to re-create the input files with Pyroms. When I run python get_hycom_GLBy0.08_salt/ssh/temp/u/v_2022.py, the terminal displays:
(base) enzo@enzo:~/pyroms/inputfiles/yangjiang$ python get_hycom_GLBy0.08_temp_2022.py
Processing file for water_temp, day 01, month 07, year 2022
Got water_temp from server...
Processing file for water_temp, day 02, month 07, year 2022
Got water_temp from server...
Processing file for water_temp, day 03, month 07, year 2022
Got water_temp from server...
Processing file for water_temp, day 04, month 07, year 2022
Got water_temp from server...
Processing file for water_temp, day 05, month 07, year 2022
Got water_temp from server...
Processing file for water_temp, day 06, month 07, year 2022
Got water_temp from server...
Processing file for water_temp, day 07, month 07, year 2022
Got water_temp from server...
Write with file data/HYCOM_GLBy0.08_temp_2022_188.nc
Done with file data/HYCOM_GLBy0.08_temp_2022_188.nc
But when I checked HYCOM_GLBy0.08_temp_2022_188.nc, I found that the file has only one ocean_time value, 44747.0 (which corresponds to 2022-07-07); I think it should contain seven ocean_time values, from 44741.0 to 44747.0.
Ocean time 1 of 1=44747.0 days since 1900-01-01 00:00:00
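For reference, the quoted time stamp checks out with the standard library: 44747 days after 1900-01-01 is indeed 2022-07-07, and the expected start value of 44741 would be 2022-07-01:

```python
from datetime import datetime, timedelta

# ocean_time is "days since 1900-01-01 00:00:00"; convert the quoted values.
epoch = datetime(1900, 1, 1)
print(epoch + timedelta(days=44747.0))  # 2022-07-07 00:00:00
print(epoch + timedelta(days=44741.0))  # 2022-07-01 00:00:00, the expected start
```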
I don't know what's wrong with it. My code is shown below; dear Kate, could you please point out the problem? Thank you so much!
import matplotlib
matplotlib.use('Agg')
import numpy as np
import netCDF4
from datetime import datetime
import pyroms
import pyroms_toolbox
import sys
import pandas as pd

def create_HYCOM_file(name, time, lon, lat, z, var):
    print('Write with file %s' % name)
    # create netCDF file
    nc = netCDF4.Dataset(name, 'w', format='NETCDF3_64BIT')
    nc.Author = sys._getframe().f_code.co_name
    nc.Created = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    nc.title = 'HYCOM + NCODA Global 1/12 Analysis (GLBy0.08) 3 hourly'
    # create dimensions
    Mp, Lp = lon.shape
    N = len(z)
    nc.createDimension('lon', Lp)
    nc.createDimension('lat', Mp)
    nc.createDimension('z', N)
    nc.createDimension('ocean_time', None)
    # create variables
    nc.createVariable('lon', 'f', ('lat', 'lon'))
    nc.variables['lon'].long_name = 'longitude'
    nc.variables['lon'].units = 'degrees_east'
    nc.variables['lon'][:] = lon
    nc.createVariable('lat', 'f', ('lat', 'lon'))
    nc.variables['lat'].long_name = 'latitude'
    nc.variables['lat'].units = 'degrees_north'
    nc.variables['lat'][:] = lat
    nc.createVariable('z', 'f', ('z'))
    nc.variables['z'].long_name = 'depth'
    nc.variables['z'].units = 'meter'
    nc.variables['z'][:] = z
    nc.createVariable('ocean_time', 'f', ('ocean_time'))
    nc.variables['ocean_time'].units = 'days since 1900-01-01 00:00:00'
    nc.variables['ocean_time'].calendar = 'LEAP'
    nc.variables['ocean_time'][0] = time
    nc.createVariable(outvarname, 'f', ('ocean_time', 'z', 'lat', 'lon'), fill_value=spval)
    nc.variables[outvarname].long_name = long_name
    nc.variables[outvarname].units = units
    nc.variables[outvarname].coordinates = 'lon lat'
    nc.variables[outvarname][0] = var
    nc.close()
    print('Done with file %s' % name)

# get HYCOM GLBy0.08 data for 2022
year = 2022
retry = 'False'
invarname = 'water_temp'
outvarname = 'temp'
year_tag = '%04d' % year
#month_tag = '01'
#day_tag = '01'
#date_tag = year_tag + month_tag + day_tag

# read grid and variable attributes from the first file
url = 'https://tds.hycom.org/thredds/dodsC/datasets/GLBy0.08/expt_93.0/data/hindcasts/2022/hycom_glby_930_2022070112_t000_ts3z.nc'
dataset = netCDF4.Dataset(url)
lon = dataset.variables['lon'][1381:1423]
repetitions_lon = len(dataset.variables['lat'][2506:2558])
lon = np.tile(lon, (repetitions_lon, 1))
lon = np.asarray(lon)
lat = dataset.variables['lat'][2506:2558]
repetitions_lat = len(dataset.variables['lon'][1381:1423])
lat = np.transpose([lat] * repetitions_lat)
z = dataset.variables['depth'][:]
#spval = dataset.variables[invarname]._FillValue
units = dataset.variables[invarname].units
long_name = dataset.variables[invarname].long_name
dataset.close()

retry_day = []
####
if year % 4 == 0:
    # daysinyear = 366
    daysinmonth = ([0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
else:
    # daysinyear = 365
    daysinmonth = ([0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])

# loop over daily files
for month in range(7, 7+1):
    month_tag = '%02d' % (month)
    for day in range(1, 7+1):
        day_tag = '%02d' % (day)
        date_tag = year_tag + month_tag + day_tag
        print('Processing file for %s, day %02d, month %02d, year %04d' % (invarname, day, month, year))
        url = 'http://tds.hycom.org/thredds/dodsC/datasets/GLBy0.08/expt_93.0/data/hindcasts/' + year_tag + '/hycom_glby_930_' + date_tag + '12_t000_ts3z.nc'  # URL changed to match the one above
        # get data from server
        try:
            dataset = netCDF4.Dataset(url)
            var = dataset.variables[invarname][0, :, 2506:2558, 1381:1423]
            # index ranges were worked out in MATLAB
            spval = var.get_fill_value()
            dataset.close()
            print('Got %s from server...' % invarname)
        except:
            print('No file on the server... We skip this day.')
            retry_day.append(day)
            continue
        # create netCDF file
        date_dash = year_tag + '-' + month_tag + '-' + day_tag
        period = pd.Period(date_dash, freq='H')  # use pandas to convert the date format
        day = period.dayofyear  # convert to day of year
        outfile = 'data/HYCOM_GLBy0.08_%s_%04d_%03d.nc' % (outvarname, year, day)
        jday = pyroms_toolbox.date2jday(datetime(year, 1, 1)) + day - 1
        create_HYCOM_file(outfile, jday, lon, lat, z, var)
Where did that script come from? I can't debug random scripts. The script I used to download HYCOM is in that Arctic_HYCOM_GLBy directory.
The initial file should have just one time - the initial time. What needs to have multiple times is the boundary file.
The HYCOM scripts download a lot of individual files. I would have day 188 for each field, then day 189 for each field, etc. I guess I didn't include the bash script to merge all the files for each day into one file for each day - but there are still separate daily files.
#!/bin/bash
#echo -n "Enter year to process: "; read year; echo $year
year=2023
leap=`echo $(($year % 4))`
if [[ $leap == 0 ]] ; then
    nday=366
else
    nday=365
fi
nday=37
#for day in $(eval echo {1..$nday}); do
for day in $(eval echo {11..$nday}); do
    day=`echo $day | awk '{printf "%03d", $1}'`
    echo $day
    ncks -O HYCOM_GLBy0.08_ssh_${year}_${day}.nc HYCOM_GLBy0.08_${year}_${day}.nc
    ncks -A HYCOM_GLBy0.08_temp_${year}_${day}.nc HYCOM_GLBy0.08_${year}_${day}.nc
    ncks -A HYCOM_GLBy0.08_salt_${year}_${day}.nc HYCOM_GLBy0.08_${year}_${day}.nc
    ncks -A HYCOM_GLBy0.08_u_${year}_${day}.nc HYCOM_GLBy0.08_${year}_${day}.nc
    ncks -A HYCOM_GLBy0.08_v_${year}_${day}.nc HYCOM_GLBy0.08_${year}_${day}.nc
    rm -f HYCOM_GLBy0.08_ssh_${year}_${day}.nc HYCOM_GLBy0.08_temp_${year}_${day}.nc HYCOM_GLBy0.08_salt_${year}_${day}.nc HYCOM_GLBy0.08_u_${year}_${day}.nc HYCOM_GLBy0.08_v_${year}_${day}.nc
done
Dear Kate,
① I modified the scripts following someone else's recorded videos, and it seems they can only download one day of data, so I ran get_hycom_GLBy0.08_salt/ssh/temp/u/v_2022.py seven times to get one week of data (2022-07-01 to 07-07). After running merge_HYCOM_GLBy0.08_daily.ksh there are 7 files, HYCOM_GLBy0.08_2022_182.nc through HYCOM_GLBy0.08_2022_188.nc. I then ran make_bdry_file.py, but I got seven boundary files (182-188). Could I add all 7 boundary files to roms.in, similar to adding multiple forcing files? Or should I merge the daily files into a single file and then run make_bdry_file.py?
② As for the ic file: my simulation runs from 2022-07-01 to 07-07, and my understanding is that I only need the data for July 1st to create the initial file. Is that right?
③ At the same time, I used the Arctic_HYCOM_GLBy directory to make input files. My simulation domain is (20.20-22.28 N; 110.40-113.76 E). I modified the code of get_hycom_GLBy0.08_salt_2019_rob.py:
import matplotlib
matplotlib.use('Agg')
import numpy as np
import netCDF4
from netCDF4 import num2date, date2num
from datetime import datetime, timedelta
import pyroms
import pyroms_toolbox
import os, sys, time

def create_HYCOM_file(name, time, lon, lat, z, vard, spval):
    print('Write with file %s' % name)
    # create netCDF file
    nc = netCDF4.Dataset(name, 'w', format='NETCDF3_64BIT')
    nc.Author = sys._getframe().f_code.co_name
    nc.Created = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    nc.title = 'HYCOM + NCODA Global 1/12 Analysis (GLBy0.08) expt_93.0'
    # create dimensions
    #Mp, Lp = lon.shape
    Lp = len(lon)
    Mp = len(lat)
    N = len(z)
    nc.createDimension('lon', Lp)
    nc.createDimension('lat', Mp)
    nc.createDimension('z', N)
    nc.createDimension('ocean_time', None)
    # create variables
    #nc.createVariable('lon', 'f', ('lat', 'lon'))
    nc.createVariable('lon', 'f', ('lon'))
    nc.variables['lon'].long_name = 'longitude'
    nc.variables['lon'].units = 'degrees_east'
    nc.variables['lon'][:] = lon
    #nc.createVariable('lat', 'f', ('lat', 'lon'))
    nc.createVariable('lat', 'f', ('lat'))
    nc.variables['lat'].long_name = 'latitude'
    nc.variables['lat'].units = 'degrees_north'
    nc.variables['lat'][:] = lat
    nc.createVariable('z', 'f', ('z'))
    nc.variables['z'].long_name = 'depth'
    nc.variables['z'].units = 'meter'
    nc.variables['z'][:] = z
    nc.createVariable('ocean_time', 'f', ('ocean_time'))
    nc.variables['ocean_time'].units = 'days since 1900-01-01 00:00:00'
    nc.variables['ocean_time'].calendar = 'gregorian'
    nc.variables['ocean_time'][0] = time
    nc.createVariable(outvarname, 'f', ('ocean_time', 'z', 'lat', 'lon'), fill_value=spval)
    nc.variables[outvarname].long_name = long_name
    nc.variables[outvarname].units = units
    nc.variables[outvarname].coordinates = 'lon lat'
    print("the vard shape is ", vard.shape)
    print("target shape for the variable is ", nc.variables[outvarname].shape)
    nc.variables[outvarname][0] = vard
    nc.close()
    print('Done with file %s' % name)

# get HYCOM GLBy0.08 data for 2022
year = 2022
invarname = 'salinity'
outvarname = 'salt'

# read grid and variable attributes from the thredds server
url = 'https://tds.hycom.org/thredds/dodsC/datasets/GLBy0.08/expt_93.0/data/hindcasts/2022/hycom_glby_930_2022070112_t000_ts3z.nc'
dataset = netCDF4.Dataset(url)
#lon = dataset.variables['Longitude'][2100:,550:4040]
lon = dataset.variables['lon'][:]
lon_coord = np.where((110.40 <= lon) & (lon <= 113.76))
lonStart = lon_coord[0][0]
lonEnd = lon_coord[0][-1] + 1
lon = dataset.variables['lon'][lonStart:lonEnd]
#lat = dataset.variables['Latitude'][2100:,550:4040]
lat = dataset.variables['lat'][:]
lat_coord = np.where((20.20 <= lat) & (lat <= 22.28))
latStart = lat_coord[0][0]
latEnd = lat_coord[0][-1] + 1
lat = dataset.variables['lat'][latStart:latEnd]
z = dataset.variables['depth'][:]
mt = dataset.variables['time'][:]
mtStart = mt[0]
mtEnd = mt[-1]
timeUnits = dataset.variables['time'].units
# convert times and units to dates we can use
mtDates = num2date(mt, timeUnits)
units = dataset.variables[invarname].units
long_name = dataset.variables[invarname].long_name
dataset.close()

# loop over daily files
if year % 4 == 0:
    daysinyear = 366
else:
    daysinyear = 365

# use python dates to specify start and stop time
startTime = datetime(year, 7, 1, 0, 0)
iTime = startTime
endTime = datetime(year, 7, 7, 0, 0)
#endTime = datetime(year, 12, 31, 0, 0)
missingDates = [
]

# convert start and end time(s) to indices on the thredds server
#startNum = date2num(startTime,timeUnits)
#endNum = date2num(endTime,timeUnits)
# we assume numeric times within the dataset on the server are
# consecutive and that none are missing.
#if startNum < mtStart:
#    print("Requested start: %s (index=%d)" % (startTime.strftime("%Y-%m-%d %H:%M:%S"),startNum))
#    print("Dataset start: %s (index=%d)" % (mtDates[0].strftime("%Y-%m-%d %H:%M:%S"),mtStart))
#    print("Dataset reported time units: %s" % (timeUnits))
#    sys.exit("Start date is outside available time index of dataset.")
#if endNum > mtEnd:
#    print("Requested end: %s (index=%d)" % (endTime.strftime("%Y-%m-%d %H:%M:%S"),endNum))
#    print("Dataset end: %s (index=%d)" % (mtDates[-1].strftime("%Y-%m-%d %H:%M:%S"),mtEnd))
#    print("Dataset reported time units: %s" % (timeUnits))
#    sys.exit("End date is outside available time index of dataset.")
#sys.exit()
#import pdb; pdb.set_trace()

# set up the correct indices to request from the server
# (0 is the first index for thredds)
#istartNum = int(startNum) - int(mtStart)
#iendNum = int(endNum) - int(mtStart)

# The thredds server limits the amount of data that can be recalled at one time.
# For 3D fields, 13 chunks seems to work.
layerStart = [0, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36]
layerEnd   = [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 40]
layerChunks = len(layerStart)

#for day in range(1,daysinyear+1):
#for datasetIndex in range(istartNum,iendNum+1):
while iTime <= endTime:
    #nowNum = datasetIndex + int(mtStart)
    # create netCDF output filename
    #day = nowNum - int(date2num(datetime(year, 1, 1, 0, 0),timeUnits)) + 1
    day = iTime.timetuple().tm_yday
    outfile = 'data/HYCOM_GLBy0.08_%s_%04d_%03d.nc' % (outvarname, year, day)
    # sometimes years have specific days missing
    if iTime in missingDates:
        iTime = iTime + timedelta(days=1)
        continue
    # skip files that exist
    if os.path.isfile(outfile):
        iTime = iTime + timedelta(days=1)
        continue
    print('Processing file for %s, %s' % (invarname, iTime.strftime("%Y-%m-%d")))
    # get data from server
    url = 'https://tds.hycom.org/thredds/dodsC/datasets/GLBy0.08/expt_93.0/data/hindcasts/%04d/hycom_glby_930_%s12_t000_ts3z.nc' % (year, iTime.strftime("%Y%m%d"))
    writeFile = True
    for layerChunk in range(0, layerChunks):
        retries = 0
        success = False
        while success == False:
            try:
                dataset = netCDF4.Dataset(url)
                var = dataset.variables[invarname][:, layerStart[layerChunk]:layerEnd[layerChunk], latStart:latEnd, :]
                #print(var.shape)
                # strip off time dimension
                var = np.squeeze(var)
                # Server sometimes delivers blocks back with all zeros.
                # If so, give it a [90 * number_of_failures second] break.
                vmin = var.min()
                vmax = var.max()
                if vmin == 0.0 and vmax == 0.0:
                    print("Field arrived as all zeros, retrying...")
                    success = False
                    retries = retries + 1
                    time.sleep(90 * retries)
                    continue
                if layerChunk == 0:
                    vard = var
                    spval = var.get_fill_value()
                else:
                    #print(vard.shape,var.shape)
                    vard = np.concatenate((vard, var), axis=0)
                    #print(vard.shape)
                dataset.close()
                print('Got %s(chunk %d of %d) from server...' % (invarname, layerChunk+1, layerChunks))
                success = True
            except:
                # On any server failure, wait 120 seconds.
                err = sys.exc_info()[1]
                #import pdb; pdb.set_trace()
                # If it is a missing data file, automatically skip it
                if 'strerror' in dir(err) and err.strerror == 'NetCDF: file not found':
                    #iTime = iTime + timedelta(days=1)
                    writeFile = False
                    success = True
                else:
                    print("Unexpected error:", err)
                    print('Server failed to deliver data, retrying...')
                    retries = retries + 1
                    time.sleep(120)
                    continue
    if writeFile:
        # create netCDF file
        jday = pyroms_toolbox.date2jday(datetime(year, 1, 1)) + day - 1
        create_HYCOM_file(outfile, jday, lon, lat, z, vard, spval)
    else:
        print("File not found on server %s" % (url))
    iTime = iTime + timedelta(days=1)
After downloading the data for July 1st, it reported an error. When I re-ran it, it downloaded the data for July 2nd, but there is no data in either output file. Dear Kate, could you please tell me what's wrong? Many thanks for your kind and warm help.
(base) enzo@enzo:~/pyroms/inputfiles/try$ python get_hycom_GLBy0.08_salt_2019_rob.py
Processing file for salinity, 2022-07-01
Got salinity(chunk 1 of 13) from server...
Got salinity(chunk 2 of 13) from server...
Got salinity(chunk 3 of 13) from server...
Got salinity(chunk 4 of 13) from server...
Got salinity(chunk 5 of 13) from server...
Got salinity(chunk 6 of 13) from server...
Got salinity(chunk 7 of 13) from server...
Got salinity(chunk 8 of 13) from server...
Got salinity(chunk 9 of 13) from server...
Got salinity(chunk 10 of 13) from server...
Got salinity(chunk 11 of 13) from server...
Got salinity(chunk 12 of 13) from server...
Got salinity(chunk 13 of 13) from server...
Write with file data/HYCOM_GLBy0.08_salt_2022_182.nc
Traceback (most recent call last):
File "src/netCDF4/_netCDF4.pyx", line 5490, in netCDF4._netCDF4.Variable.__setitem__
File "/home/enzo/anaconda3/lib/python3.10/site-packages/numpy/ma/core.py", line 3429, in shape
super(MaskedArray, type(self)).shape.__set__(self, shape)
ValueError: cannot reshape array of size 9360000 into shape (1,40,52,42)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/enzo/pyroms/inputfiles/try/get_hycom_GLBy0.08_salt_2019_rob.py", line 224, in <module>
create_HYCOM_file(outfile, jday, lon, lat, z, vard, spval)
File "/home/enzo/pyroms/inputfiles/try/get_hycom_GLBy0.08_salt_2019_rob.py", line 59, in create_HYCOM_file
nc.variables[outvarname][0] = vard
File "src/netCDF4/_netCDF4.pyx", line 5492, in netCDF4._netCDF4.Variable.__setitem__
File "<__array_function__ internals>", line 180, in broadcast_to
File "/home/enzo/anaconda3/lib/python3.10/site-packages/numpy/lib/stride_tricks.py", line 412, in broadcast_to
return _broadcast_to(array, shape, subok=subok, readonly=True)
File "/home/enzo/anaconda3/lib/python3.10/site-packages/numpy/lib/stride_tricks.py", line 348, in _broadcast_to
it = np.nditer(
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (40,52,4500) and requested shape (1,40,52,42)
(base) enzo@enzo:~/pyroms/inputfiles/try$ python get_hycom_GLBy0.08_salt_2019_rob.py
Processing file for salinity, 2022-07-02
Got salinity(chunk 1 of 13) from server...
Got salinity(chunk 2 of 13) from server...
Got salinity(chunk 3 of 13) from server...
Got salinity(chunk 4 of 13) from server...
Got salinity(chunk 5 of 13) from server...
Got salinity(chunk 6 of 13) from server...
Got salinity(chunk 7 of 13) from server...
Got salinity(chunk 8 of 13) from server...
Got salinity(chunk 9 of 13) from server...
Got salinity(chunk 10 of 13) from server...
Got salinity(chunk 11 of 13) from server...
Got salinity(chunk 12 of 13) from server...
Got salinity(chunk 13 of 13) from server...
Write with file data/HYCOM_GLBy0.08_salt_2022_183.nc
the vard shape is (40, 52, 4500)
target shape for the variable is (1, 40, 52, 42)
Traceback (most recent call last):
File "src/netCDF4/_netCDF4.pyx", line 5490, in netCDF4._netCDF4.Variable.__setitem__
File "/home/enzo/anaconda3/lib/python3.10/site-packages/numpy/ma/core.py", line 3429, in shape
super(MaskedArray, type(self)).shape.__set__(self, shape)
ValueError: cannot reshape array of size 9360000 into shape (1,40,52,42)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/enzo/pyroms/inputfiles/try/get_hycom_GLBy0.08_salt_2019_rob.py", line 226, in <module>
create_HYCOM_file(outfile, jday, lon, lat, z, vard, spval)
File "/home/enzo/pyroms/inputfiles/try/get_hycom_GLBy0.08_salt_2019_rob.py", line 61, in create_HYCOM_file
nc.variables[outvarname][0] = vard
File "src/netCDF4/_netCDF4.pyx", line 5492, in netCDF4._netCDF4.Variable.__setitem__
File "<__array_function__ internals>", line 180, in broadcast_to
File "/home/enzo/anaconda3/lib/python3.10/site-packages/numpy/lib/stride_tricks.py", line 412, in broadcast_to
return _broadcast_to(array, shape, subok=subok, readonly=True)
File "/home/enzo/anaconda3/lib/python3.10/site-packages/numpy/lib/stride_tricks.py", line 348, in _broadcast_to
it = np.nditer(
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (40,52,4500) and requested shape (1,40,52,42)
I am very sorry to trouble you with reading so much content, and I just want to express my sincere gratitude once again.
Time for some debugging, right? What is in these variables?
lonStart = lon_coord[0][0]
lonEnd = lon_coord[0][-1]+1
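A numpy sketch of what the traceback above reports: the downloaded field arrives as (40, 52, 4500) while the file variable expects (1, 40, 52, 42), i.e. the longitude axis was never subset. The index values below are taken from the first script and used only for illustration:

```python
import numpy as np

# Stand-in for the downloaded field: depth x lat x lon, lon never subset.
vard = np.zeros((40, 52, 4500))
lonStart, lonEnd = 1381, 1423        # a 42-point window, as in the first script
subset = vard[:, :, lonStart:lonEnd]
print(subset.shape)  # (40, 52, 42) -- this broadcasts into (1, 40, 52, 42)
```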
As for the other, I can run each get_hycom... script for a whole year at a time. I let each day stay as its own file in the hycom directory. Then the script to make the OBC files can loop over them. Somehow on the ROMS side, I get one OBC file per year. If it takes doing ncrcat from the NCO package to do that, then that's what I do. The https://nco.sourceforge.net/ NCO package is worth learning to use.
The chunking you see is it getting the file in parts because my Arctic domain is so very big that I can't download one day in one fetch - Rob worked out how to do the chunking for me. You probably don't need to do that.
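The chunked fetch can be sketched in miniature with numpy; the depth bounds are the ones in the script above, while the array sizes are stand-ins for this demo:

```python
import numpy as np

# 40 depth levels fetched in 13 pieces, then rejoined along the depth axis.
layerStart = [0, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36]
layerEnd   = [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 40]
full = np.arange(40 * 5 * 4, dtype=float).reshape(40, 5, 4)  # stand-in (z, lat, lon)
chunks = [full[s:e] for s, e in zip(layerStart, layerEnd)]
vard = np.concatenate(chunks, axis=0)
print(vard.shape)                  # (40, 5, 4)
print(np.array_equal(vard, full))  # True -- identical to a single fetch
```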
Sorry, I added those variables to the code based on my own ideas. I wanted to limit the latitude range, and I couldn't find corresponding code in get_hycom_GLBy0.08_salt_2019_rob.py, but that may have introduced the bug. Should I limit the latitude range for my simulation domain?
Thank you for your suggestions; I will try to merge these bdry files with NCO and comment out the chunking code.
The initial files still confuse me. The simulation runs from 2022-07-01 to 07-07, and my understanding is that I only need the data for July 1st to create the initial file. Is that right? I will run some tests later.
Thank you so much! Hope you have a nice day!
Yes, initial conditions are simply the model state at the beginning of the run. You only need the one time.
The code as it is on github is the way it is for downloading a huge Arctic domain. It has the lat-lon ranges hard-coded, including needing all the longitudes. You can download a smaller patch, picking the lat-lon range however you like, as long as it works.
Dear Kate, thank you sooooo much! It's late in your city; take good care of yourself and have a good night! I was surprised by your picture, as it shows a lion dance, a traditional art of my country, China.
You like that? Yes, I know a little about lion dance, also dragon dance. I'm too slow to be a dragon leg!
Haha, it's interesting. I like them; they represent our culture. I think they are sports that require a high level of cooperation, skill, and physical strength. Perhaps due to competitiveness, young Chinese people prefer sports like basketball, football, badminton, and table tennis. We can only watch the performances at traditional festivals, so we have almost no chance of encountering dragon and lion dances. Hope you enjoy them, haha
Hello YoungEnzo, I've encountered some problems during the installation of Scrip; can I ask you for some advice? Could I add your contact information, such as QQ or WeChat? Thank you very much!!