Posts

Error for pip install Pillow on Ubuntu virtualenv

Issue:
Failed building wheel for Pillow
Running setup.py clean for Pillow

Solution:
sudo apt-get install python-dev
sudo apt-get install libjpeg8-dev
sudo ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib
pip install pillow
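To confirm the fix took, a quick sanity check (assuming the virtualenv is active) is to verify the JPEG development package is present and that Pillow imports cleanly:

dpkg -l | grep libjpeg                 # the -dev package should be listed
python -c "from PIL import Image"      # exits silently if Pillow built correctly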

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-y2kyiV/psycopg2/

Issue: Installing Sentry fails with
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-y2kyiV/psycopg2/

Solution:
Python 2: sudo apt install libpq-dev python-dev
Python 3: sudo apt install libpq-dev python3-dev
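With the PostgreSQL client headers in place, psycopg2 should build when you retry; a minimal check before re-running the original Sentry install:

pip install psycopg2      # should now compile against libpq
# then re-run the install command that originally failed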

fixing for error - postdrop: warning: unable to look up public/pickup: No such file or directory.

I installed postfix and got this error:

postdrop: warning: unable to look up public/pickup: No such file or directory.

Turns out that sendmail was previously installed and that was messing things up. I had to stop sendmail, make the appropriate directory, and restart postfix. Specifically:

mkfifo /var/spool/postfix/public/pickup
ps aux | grep mail
kill <PID of the sendmail process found above>
sudo /etc/init.d/postfix restart

Test emailing:

mail youremail@abc.com (enter)
subject: test (enter)
test (ctrl+d)
cc: youremail@abc.com (enter)

If you receive the message, then email is working fine now! :)
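If the test message never arrives, two standard checks usually show what happened (the log path is the typical Ubuntu location and may differ on other distributions):

mailq                       # lists messages stuck in the Postfix queue
tail -f /var/log/mail.log   # watch delivery attempts in real time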

Using a value file in a parameter set in Information Server DataStage

Question
How do I create and use a value file within a parameter set in DataStage?

Answer
Using a value file in a parameter set allows you to set the values of parameters dynamically. For example: Job A updates the value file, and Job B uses a parameter set that points to that value file. When moving jobs from development to test or production, you can update the value file to reflect different values for the different environments without having to recompile the job.

To create a new Parameter Set, select File > New and choose "Create new Parameter Set". This launches a dialog. Fill out the appropriate information on the General tab and then proceed to the Parameters tab. In this tab, enter the parameters you wish to include in this Parameter Set; note that you can also add existing environment variables. The last tab, Values, allows you to specify a Value File name. This is the name of the file that will automatically be created on the Engine tier. This tab also all...
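For orientation, the value file itself is just a plain text file that DataStage creates under the project's directory on the Engine tier, roughly one name/value pair per line. A sketch of what it might contain (parameter names and values here are illustrative, not from this post):

SourceDir=/data/dev/incoming
DBUser=etl_dev
BatchSize=500

Editing these lines (for example, pointing SourceDir at the production path) changes what the job picks up at run time, with no recompile.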

A DataStage job does not use the new value that is put in the Parameter set.

Problem (Abstract)
A DataStage job does not use the new value that is put in the Parameter Set.

Cause
If you make any changes to a parameter set object, these changes will be reflected in job designs that use this object up until the time the job is compiled. The parameters that a job is compiled with are the ones that will be available when the job is run (although if you change the design after compilation, the job will once again link to the current version of the parameter set).

Diagnosing the problem
Examine the log entry "Environment variable settings" for parameter sets. If the parameter set specifies the value "(As predefined)", the parameter set is using the value that was used during the last compile.

Resolving the problem
If the value of the parameter set may be changed, you should specify a Value File for the parameters or set the parameters in the parameter set (including encrypted parameters) to $PROJDEF.
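Once the parameters are backed by a Value File (or set to $PROJDEF), changing a value no longer requires a recompile: you edit the value file on the Engine tier before the run. A sketch, with an assumed installation and project path:

vi /opt/IBM/InformationServer/Server/Projects/MyProject/ParameterSets/MyParamSet/prod_values
# update the name=value lines, save, then run the job as usual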

How to set default values for Environment Variables without re-compiling DataStage jobs

Question
Is it possible to set/change the default value for an environment variable without re-compiling the DataStage job?

Answer
Yes, it is possible to set/change the default value for an environment variable without recompiling the job. You can manage all of your environment variables from the DataStage Administrator client. To do this, follow these steps:

1. Open the Administrator client, select the project you are working in, and click Properties.
2. On the General tab, click Environment.
3. Create a new environment variable under the "User Defined" section, or update the variable if it already exists.
4. Set the value of the variable to what you want the DataStage job to inherit.
5. Close the Administrator client so the variable is saved.
6. Open the DataStage Designer client and navigate to the Job Properties.
7. Add the environment variable name that you just created in the DataStage Administrator client.
8. Set the value of the new variable to $PROJDEF. This will inherit whate...
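If you would rather script the project-level default (for example, as part of promoting to a new environment), the dsadmin command-line client can usually do the same thing as the Administrator GUI. The exact syntax below is an assumption; check dsadmin -help on your installation before relying on it:

dsadmin -envset MY_SOURCE_DIR -value /data/prod/incoming MyProject
# any job whose parameter is set to $PROJDEF picks up the new default at the next run, with no recompile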

How to fix missing and underreplicated blocks - HDFS

$ su - <hdfs_user>
$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files
$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :"; hadoop fs -setrep 3 $hdfsfile; done

In the commands above, 3 is the target replication factor. If you are running a single DataNode, it must be 1.
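Under-replicated blocks can be repaired this way because at least one replica still exists. Blocks reported as missing or corrupt cannot be fixed with setrep; the usual approach is to identify the affected files and either restore them from the source or remove them. A short sketch using standard fsck options:

$ hdfs fsck / -list-corruptfileblocks                      # list files with missing/corrupt blocks
$ hdfs fsck /path/to/file -files -blocks -locations        # inspect where a specific file's blocks should be
$ hdfs fsck / -delete                                      # last resort: removes the corrupt files (data loss)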