Posts

Showing posts from February, 2015

Fix VNC Desktop Sharing on Ubuntu Desktop 14.04

Solution 1:

sudo apt-get -y install dconf-tools
dconf write /org/gnome/desktop/remote-access/require-encryption false
/usr/lib/vino/vino-server --sm-disable start

Solution 2:

gsettings set org.gnome.Vino require-encryption false

Set the time to 12-hour instead of 24-hour in Lubuntu?

Right-click it -> Edit Digital Clock settings. Change the clock format from %R to %r.

How can I lock the screen on LXDE

First (in a terminal) start your editor:

vi ~/.config/openbox/lubuntu-rc.xml

Then search for the mentioned faulty code:

<keybind key="C-A-L">
  <action name="Execute">
    <command>xscreensaver-command -lock</command>
  </action>
</keybind>

and change it to use dm-tool:

<keybind key="C-A-L">
  <action name="Execute">
    <command>dm-tool lock</command>
  </action>
</keybind>

Then add a section so the Windows+L combination works too:

<keybind key="W-L">
  <action name="Execute">
    <command>dm-tool lock</command>
  </action>
</keybind>

Finally, exit the editor (saving the file) and activate the change:

openbox --reconfigure

Good luck
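The manual edit above can also be scripted. A minimal sketch, assuming the rc file contains the stock xscreensaver-command binding (the path is the Lubuntu default; the function name is illustrative):

```shell
#!/bin/sh
# Swap the xscreensaver lock command for dm-tool in an Openbox rc file.
patch_lock_command() {
    rc=$1
    # GNU sed in-place edit; back the file up first if unsure.
    sed -i 's|xscreensaver-command -lock|dm-tool lock|' "$rc"
}

# Usage (then run: openbox --reconfigure):
# patch_lock_command ~/.config/openbox/lubuntu-rc.xml
```

This only rewrites the existing Ctrl-Alt-L binding; the extra W-L section still has to be added by hand.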

How to verify if DataStage project has any corrupted files

Question
If the DataStage project or temp directory runs out of disk space, some buffered write requests might not complete correctly and the project might end up with corrupted hash files. Is there a way to check the project for corrupted hash files?

Answer
Run the uvbackup process and redirect the backup to the null device. uvbackup produces an output file that can be used to identify the corrupted files. Here are the steps (please note that you must be logged in as the DataStage Administrator):

1. Source your dsenv file in $DSHOME (. ./dsenv)
2. Go to your project directory (../InformationServer/Project/<project name>)
3. List all files and redirect them to a file (ls > myfiles.txt) - this gives uvbackup its list of files
4. Run uvbackup and redirect output to the null device with this command: "$DSHOME/bin/uvbackup -V -f -cmdfil myfiles.txt -s uvbackupout.txt -t /dev/null 2>&1 > testing123.txt"
5. grep ...
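The steps above can be sketched as a small script. This is a sketch only: $DSHOME, the project path, and the uvbackup flags are taken from the steps above and must match your installation, and the final grep step (truncated above) is left out:

```shell
#!/bin/sh
# Sketch of the hash-file corruption check described above.
check_project() {
    project_dir=$1
    cd "$project_dir" || return 1

    # Source the DataStage environment if it is available.
    [ -f "$DSHOME/dsenv" ] && . "$DSHOME/dsenv"

    # Build the file list that uvbackup will read.
    ls > myfiles.txt

    # Run uvbackup, discarding the backup itself; only the report matters.
    if [ -x "$DSHOME/bin/uvbackup" ]; then
        "$DSHOME/bin/uvbackup" -V -f -cmdfil myfiles.txt \
            -s uvbackupout.txt -t /dev/null > testing123.txt 2>&1
    else
        echo "uvbackup not found under \$DSHOME; run this on the DataStage server" >&2
    fi
}

# Usage (placeholder path):
# check_project /opt/IBM/InformationServer/Server/Projects/myproject
```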

Copy the first n files to a different Linux directory

find . -maxdepth 1 -type f | head -1000 | xargs cp -t $destdir
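The one-liner relies on GNU cp's -t option, which BSD/macOS cp lacks. A minimal portable sketch of the same idea; the function name and the count are illustrative:

```shell
#!/bin/sh
# Copy the first n regular files from srcdir to destdir (non-recursive).
copy_first_n() {
    srcdir=$1 destdir=$2 n=$3
    mkdir -p "$destdir"
    # -maxdepth 1 keeps it non-recursive; head limits the count.
    find "$srcdir" -maxdepth 1 -type f | head -n "$n" |
    while IFS= read -r f; do
        cp "$f" "$destdir/"   # per-file cp avoids the GNU-only 'cp -t'
    done
}

# Usage:
# copy_first_n . "$destdir" 1000
```

Note that find's output order is unspecified, so "first 1000" means the first 1000 the filesystem happens to return, in both versions.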

What is RT_config file

RT_CONFIGnn is a UniVerse table (don't hack it with a text editor). It contains the runtime configuration information for job number nn: things like which links connect to which stages, which resources have to be notified when others finish, and so on. Obviously this information is used when the job is run. Most of it is put there when the job is compiled in Designer, though run-time defaults for job parameters can be set from Director.

SuspendedPropagated for HADR group

Problem (Abstract)
The 'lssam' command shows Control=SuspendedPropagated for the HADR resource group when the database resource is shown as Failed offline on one node and Online on the other. What does this mean?

Environment
Here's an example of the lssam output that shows this situation:

Online IBM.ResourceGroup:db2_db2inst1_db2inst1_AM-rg Request=Lock Nominal=Online
        '- Online IBM.Application:db2_db2inst1_db2inst1_AM-rs Control=SuspendedPropagated
                |- Failed offline IBM.Application:db2_db2inst1_db2inst1_AM-rs:alx00005 Node=Offline
                '- Online IBM.Application:db2_db2inst1_db2inst1_AM-rs:alx00006

Resolving the problem
It is expected behaviour to see the Control flag set to SuspendedPropagated when the resource group is locked. It is also expected behaviour to see the HADR resource group locked when it is not in a peer connected state. The DB2 engine is supposed to unlock the group once peer connected stat...

How to unlock a Tivoli SAMP resource group

Problem (Abstract)
A Tivoli SAMP resource group shows "Request=Lock", and Tivoli SAMP fails to automate resources in an IBM Smart Analytics System.

Cause
The Tivoli SAMP resource group (RG) shows "Request=Lock". When a resource group is locked, Tivoli SAMP will not automate its resources: the lock prevents Tivoli SAMP from controlling the resource group, which behaves as if automation were turned off. The "db2stop" command is one known command that introduces a lock.

Environment
IBM Smart Analytics System with Tivoli SAMP enabled.

Diagnosing the problem
Use the lssam command to identify a locked resource. Note: you must run the "lssam" command as the root user to see locks. The following is partial output of the "lssam" command; notice that the resource group "db2_db2admin01_0-rg" is locked. -----> lssa...

DataStage Job Compile - Receives "Failed to invoke GenRuntime using phantom process helper." error.

Problem (Abstract)
When attempting to compile a job, the user receives: "Failed to invoke GenRuntime using phantom process helper."

Cause
Possible causes for this error include:
- The server's /tmp space was full
- Job status incorrect
- Format problem with the project's uvodbc.config file
- Corrupted DS_STAGETYPES file
- Internal locks

Diagnosing the problem
If the steps under Resolving the problem do not resolve the problem, proceed with the following steps. Before opening a PMR with support, turn on server-side tracing, attempt to compile the problem job, turn off server-side tracing, and gather the tracing information. Turn on server-side tracing by connecting to the server with the DataStage Administrator client:
1. Highlight the project which has the problem job.
2. Click the Properties button.
3. In the Properties window, click the Tracing tab.
4. Click the Enabled check box.
5. Click the OK button.
With a new DataStage Designer connection, attempt t...

What is the &PH& directory used for in DataStage and does it need to be cleaned out

Question
What is the &PH& directory used for and does it need to be cleaned out?

Answer
In each project there is a &PH& directory. Phantom processes write entries to it, with names of this form:

DSD.RUN_InternalDate_InternalTime
DSD.STAGERUN_InternalDate_InternalTime

This directory can become large and affect performance. There is no exact number of entries that causes a problem, due to variances in computing power; generally the directory should be cleaned as regular maintenance. The more jobs running, the quicker it grows. You can check how many entries exist with the command: ls | wc -l.

There are a couple of ways to fix this problem:
1. Log into Administrator --> Projects --> Command and type: CLEAR.FILE &PH&
   This command should ONLY be run when you have no jobs running and no users logged into DataStage clients.
2. From $DSHOME:
   1. source the dsenv file: . ./dsenv
   2. type: ./bin/uvsh
   3. type: LOGTO <Pr...
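As an alternative to clearing the whole directory, old entries can be aged out from the shell. A minimal sketch, assuming a 7-day retention window is safe for your site and that no running job still needs the entries; the project path is a placeholder, and the directory name must be quoted because of the ampersands:

```shell
#!/bin/sh
# Remove &PH& phantom-log entries older than a given number of days.
clean_ph() {
    ph_dir=$1 days=$2
    # Quote the path: '&PH&' contains shell-special characters.
    find "$ph_dir" -maxdepth 1 -type f -name 'DSD.*' -mtime +"$days" -exec rm -f {} +
}

# Usage (placeholder path):
# clean_ph '/opt/IBM/InformationServer/Server/Projects/myproj/&PH&' 7
```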

scp between two remote hosts from my local

In the past, when called (naively) to copy files between remote systems, scp worked in a very inconvenient way. If you wrote, for instance,

scp user1@remote1:/home/user1/file1.txt user2@remote2:/home/user2/file1.txt

scp would first open an ssh session on remote1, and then it would run scp from there to remote2. For this to work, you would have to set up the authorization credentials for remote2 on remote1. The modern way to do it instead (modern because it was implemented only a few years ago, and perhaps not everybody has a -3-capable scp) requires two steps. First, use the -3 option, as follows:

scp -3 user1@remote1:/home/user1/file1.txt user2@remote2:/home/user2/file1.txt

The -3 option instructs scp to route traffic through the PC on which the command is issued, even though it is a third party to the transfer. This way, authorization credentials must reside only on the issuing PC, the third party. The second nece...

Stop and Start OBIEE 11g Services in Linux

Stop Services:
===============
1. Stop opmnctl
   Navigate to <MiddlewareHome>/instances/instance1/bin
   ./opmnctl stopall
2. Stop Managed Server (bi_server1)
   Navigate to <MiddlewareHome>/user_projects/domains/bifoundation_domain/bin
   ./stopManagedWebLogic.sh bi_server1
3. Stop Admin Server (weblogic)
   In the same location as above:
   ./stopWebLogic.sh
4. Stop Node Manager
   Just kill the Node Manager process:
   ps -ef | grep node    # to find the Node Manager pid
   kill -9 <nodemanager_pid>
   Note: if the Managed and Admin servers did not stop properly, you can kill them the same way:
   ps -ef | grep weblogic
   kill -9 <pid>

Start Services:
======================
1. Start Node Manager
   Navigate to <MiddlewareHome>/wlserver_10.3/server/bin
   nohup sh startNodeManager.sh &
2. Start Admin Server
   Navigate to <MiddlewareHome>/user_projects/domains/bifoundation_domain/bin
   nohup sh startWebLogic.sh -Dweblogic.management.username=weblogic -Dweblogic.ma...

NFS share nobody nobody

Edit /etc/idmapd.conf on both client and server. Near the top, in the [General] section, I uncommented the line

#Domain = local.domain.edu

and changed it to

Domain = foo.home

Then I restarted rpc.idmapd on both the server and the client.
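The edit can be scripted if you have several machines to fix. A minimal sketch, where the domain value is the example from above and the restart command varies by distribution:

```shell
#!/bin/sh
# Set the NFSv4 idmapd Domain in an idmapd.conf-style file.
set_idmap_domain() {
    conf=$1 domain=$2
    # Replace a commented-out or existing Domain line (GNU sed in-place).
    sed -i "s/^#\{0,1\}Domain *=.*/Domain = $domain/" "$conf"
}

# Usage:
# set_idmap_domain /etc/idmapd.conf foo.home
# Then restart the mapping daemon on both ends, e.g.:
#   service rpcidmapd restart    (or: systemctl restart nfs-idmapd)
```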

Netezza S/W Architecture

S/W Architecture
The Netezza hardware components and intelligent system software are closely intertwined. The software is designed to fully exploit the hardware capabilities of the appliance and incorporates numerous innovations to offer exponential performance gains, whether for simple inquiries, complex ad-hoc queries, or deep analytics. In this section, we examine the intelligence built into the system every step of the way. Netezza software components include:
- A sophisticated parallel optimizer that transforms queries to run more efficiently and ensures that each component in every processing node is fully utilized
- An intelligent scheduler that keeps the system running at its peak throughput, regardless of workload
- Turbocharged Snippet Processors that efficiently execute multiple queries and complex analytics functions concurrently
- A smart network that makes moving large amounts of data through the Netezza system a breeze
Let...

Netezza S-Blade

Commodity components and Netezza software combine to extract the utmost throughput from each MPP node. A dedicated high-speed interconnect from the storage array delivers data to memory as quickly as each disk can stream. Compressed data is cached in memory using a smart algorithm, which ensures that the most commonly accessed data is served right out of memory instead of requiring a disk access. FAST Engines (shown in Figure 2) running in parallel inside the FPGAs uncompress and filter out 95–98% of table data at physics speed, keeping only data needed to answer the query. The remaining data in the stream is processed concurrently by CPU cores, also running in parallel. The process is repeated on more than a thousand of these parallel Snippet Processors running in the Netezza appliance. The FPGA is a critical enabler of the price-performance advantages of the Netezza platform. Each FPGA contains embedded engines that perform filtering and transformation ...

Netezza AMPP

A major part of the Netezza solution's performance advantage comes from its unique AMPP architecture (shown in Figure 1), which combines an SMP front end with a shared-nothing MPP back end for query processing. Each component of the architecture is carefully chosen and integrated to yield a balanced overall system. Every processing element operates on multiple data streams, filtering out extraneous data as early as possible. More than a thousand of these customized MPP streams work together to divide and conquer the workload. Let's examine the key building blocks of the appliance:

Netezza hosts
The SMP hosts are high-performance Linux servers set up in an active-passive configuration for high availability. The active host presents a standardized interface to external tools and applications. It compiles SQL queries into executable code segments called snippets, creates optimized query plans, and distributes the snippets to the MPP nodes fo...

Netezza H/W Architecture

Netezza follows an Asymmetric Massively Parallel Processing (AMPP) architecture.

Architectural principles
The Netezza appliances integrate database, processing, and storage in a compact system optimized for analytical processing and designed for flexible growth. The system architecture is based on the following core tenets, which have been a hallmark of Netezza's leadership in the industry:
- Processing close to the data source
- Balanced massively parallel architecture
- Platform for advanced analytics
- Appliance simplicity
- Accelerated innovation and performance improvements
- Flexible configurations and extreme scalability

Processing close to the data source
The Netezza architecture is based on a fundamental computer science principle: when operating on large data sets, do not move data unless absolutely necessary. Netezza fully exploits this principle by utilizing commodity components called Field Programmable Gate Arrays (FPGAs) to filter out...

Netezza Introduction

The IBM® Netezza appliance is a test and development system that packs the performance and simplicity of Netezza's unique architecture into a compact footprint. The IBM Netezza appliance offers customers an economical platform to develop and test their Business Intelligence (BI) and advanced analytic applications. It also shares the same characteristics as its enterprise-class counterpart: simplicity, ease of deployment and use, and hardware-based acceleration of analytic queries and workloads.

Simplicity
The IBM Netezza is an easy-to-use appliance that requires minimal tuning and administration, speeding up application development. It is delivered ready-to-go for immediate data loading and query execution and integrates with leading ETL, BI and analytic applications through standard ODBC, JDBC and OLE DB interfaces.

Performance
The IBM Netezza system's performance advantage comes from IBM's unique Asymmetric Massively Parallel Processin...