Thursday, October 12, 2017

Dependency errors with yum update - libgpod

I tried to do a 'yum update' one day after a couple of weeks had gone by, so there was a big list of packages available. I know I can use yum-cron, but I like to be more in control and do it myself. Of course I used to do the same thing with Windows and eventually caved and let it update whenever, but for now I want to do it myself. :)

Anyway, I was getting dependency errors on libgpod, which I don't need since I don't have Apple devices - it allows connections to an iPod. The following was the output:

yum update

[snip]

Errors:
Error: Package: libgpod-0.8.3-14.el7.x86_64 (@epel)
           Requires: libplist.so.1()(64bit)
           Removing: libplist-1.10-4.el7.x86_64 (@anaconda/7.2)
               libplist.so.1()(64bit)
           Updated By: libplist-1.12-3.el7.x86_64 (ol7_latest)
              ~libplist.so.3()(64bit)
Error: Package: libgpod-0.8.3-14.el7.x86_64 (@epel)
           Requires: libimobiledevice.so.4()(64bit)
           Removing: libimobiledevice-1.1.5-6.el7.x86_64 (@anaconda/7.2)
               libimobiledevice.so.4()(64bit)
           Updated By: libimobiledevice-1.2.0-1.el7.x86_64 (ol7_latest)
              ~libimobiledevice.so.6()(64bit)
Error: Package: libgpod-0.8.3-14.el7.x86_64 (@epel)
           Requires: libusbmuxd.so.2()(64bit)
           Removing: usbmuxd-1.0.8-11.el7.x86_64 (@anaconda/7.2)
               libusbmuxd.so.2()(64bit)
           Obsoleted By: usbmuxd-1.1.0-1.el7.x86_64 (ol7_latest)
               Not found
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

I poked into it a bit, but in the end there was no working around the root of the problem. I tried the two options suggested:

yum update --skip-broken

Well, this worked, but it just avoids the issue and leaves the problem for next time.

rpm -Va --nofiles --nodigest

This verifies installed packages and checks for anything wrong; the --nofiles and --nodigest options skip the per-file checks and the package/header digest checks respectively. It didn't help in my case - nothing useful was discovered.

In the end, I removed the problem package:

rpm -e --nodeps --allmatches libgpod
yum update

The --nodeps option ignores dependencies, and --allmatches removes all installed versions of the package.

Voila.  Now if I had an iPod, it would've meant actually resolving the root problem. I'm not sure exactly what it is, but I suspect the libgpod package in EPEL would need to be rebuilt against the newer library versions (libplist, libimobiledevice, usbmuxd) before it could update cleanly.

Addendum: you may also want to exclude the package from the EPEL repository so it doesn't come back. You can add:

exclude=libgpod*

to the file:

/etc/yum.repos.d/epel.repo
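
As a rough sketch, the repo section might end up looking something like this (the existing lines in your epel.repo will differ - only the exclude line is the addition):

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
# ...existing baseurl/mirrorlist/gpgkey lines stay as they are...
exclude=libgpod*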

Thursday, May 11, 2017

Three ways to determine the version of ADF libraries on a Weblogic server

Environment / assumptions for this post:
  • Oracle Enterprise Linux 7.2 (OEL)
  • Fusion Middleware Home with Weblogic 11.x or 12.x
  • ADF runtime libraries installed
Sometimes it isn't clear what version of ADF is on the server you're trying to deploy to. The differences between 12.2.1.0 and 12.2.1.1 may be small, but Oracle does fix bugs and one may apply to you. One clue is the WebLogic console, which shows a version right on the screen - but that may not reflect the actual ADF runtime, so to find the real library versions you need to dig a bit.

A couple different ways of doing this:
  1. Check the MANIFEST of adf-share-support.jar. 
    • Easy way: find the jar and list the MANIFEST
    • Easy but more fooferaw: use Oracle's printJarVersions script
  2. Use the PrintVersion class in oracle.jbo.common package.
  3. Examine ADF versions from Enterprise Manager -> deployments

First way: check MANIFEST of adf-share-support.jar

cd $MW_HOME/oracle_common/modules/oracle.adf.share
unzip -p adf-share-support.jar META-INF/MANIFEST.MF

Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.0RC1
Created-By: 25.40-b23 (Oracle Corporation)
Oracle-Version: 12.2.1.0.42.151011.0031
Oracle-Label: JDEVADF_MAIN_GENERIC_151011.0031.S
Oracle-Label-JDEVADF: JDEVADF_MAIN_GENERIC_151011.0031.S
Oracle-Builder: Official Builder
Oracle-Internal-Release: 12.2.1.0.0KMTEST-BP42
Oracle-BuildSystem: Linux - java - 1.8.0_40-ea-b19
Oracle-BuildTimestamp: 2015-10-11 04:35:53 -0700

This actually does the same thing, with slightly different output:

cd $MW_HOME/oracle_common/common/bin
./printJarVersions.sh  | grep 'adf-share-support.jar'

Second way: use PrintVersion class

cd $MW_HOME/oracle_common/modules/oracle.adf.model
java -cp adfm.jar oracle.jbo.common.PrintVersion

BC4J Version is:  12.2.1.0.31


This Oracle document 401694.1 has information on release version numbers - see the "42" in Oracle-Version of the first way, and "31" in the second way.

Third way: look at the deployment in Enterprise Manager

I'll use the Enterprise Manager application itself as an example. The Weblogic Console and the EM app change from version to version - these screenshots are from version 12.2.1. The trick here is to make sure you are in the "Application Deployments" part of the navigation tree, and not the "Weblogic Domain" part of the tree.

Step 1 - go to the correct part of the tree in Enterprise Manager:

Enterprise Manager Navigation Tree -
Start by selecting an ADF application deployment.


Step 2 - from the menu, select ADF -> Versions:

From the Deployments menu, select ADF and then Versions

Step 3 - filter to find what you're looking for.
I would try looking for *adfm* in the Jar name column, or maybe 12.2 in the "Oracle Version":

Filter on one of the important ADF jar files, such as adfm.jar.
You can try a filter like *adfm*.

Friday, May 5, 2017

Oracle Enterprise Linux side by side with Microsoft Windows using a software kvm

Environment / assumptions for this post:
  • Oracle Enterprise Linux 7.2 (OEL)
  • Windows 7
  • Both OEL and Windows have their own monitor
  • TigerVNC for Windows, version 1.7.1
  • x2vnc version 1.7.2
Goal: to use one keyboard and mouse between two physical computers and monitors without any hardware.

I was excited to switch from Windows to Linux (in my case Oracle Enterprise Linux, basically RedHat) for development, although I wasn't too thrilled about having two mice and two keyboards on my desk. I kept doing things like moving the Linux mouse and then typing on the Windows keyboard. There are many cheap KVM solutions, but I wanted to try a software solution. 

To make this work, you need to run a piece of software on the Linux side, and a VNC server on the Windows side. 

Step 1 - installing a VNC server on Windows

There are a few freely available VNC servers around, but we need one that specifically is backwards compatible with protocol version 3.3 - the version used by x2vnc. I had issues with TightVNC, so I tried TigerVNC. I found a 64bit Windows binary here:

https://github.com/TigerVNC/tigervnc/releases

Install the normal way. Make sure to change the port if required (default is 5900), and add a connection password. If you install as a service, it will restart when you boot.

Step 2 - installing x2vnc on OEL

This is a program written by Fredrik Hubinette. It was based on vncviewer code and uses the RFB protocol, specifically version 3.3 (the protocol is up to 3.8 as of this writing, which is why the server needs to be backwards compatible). The steps are as follows:

Download the code from:

http://fredrik.hubbe.net/x2vnc.html

Extract into a temp directory:

cd my_temp_folder
tar -xvf x2vnc-1.7.2.tar.gz
cd x2vnc-1.7.2

Run configure:

./configure

Run make:

make

Do the install:

sudo make install

One option is to put the x2vnc command (shown below) in your .xinitrc if you want it to run every time you log in to your graphical environment; I ended up using a Gnome autostart entry instead, described further down.

It is probably a good idea to put the VNC server connection password into a file. The first time you run the program, if you specify the filename it will ask you for the password and create it for you. It will create the password with permissions 600. In the following, I use "west" since my physical Windows monitor is on the left of my Linux desktop, so I want my mouse to roll off to the left, or west direction.

x2vnc -passwdfile ~/.vncpasswd -west 192.168.88.88:0

Finally, to get this to run automatically when you log in (Gnome in my case), you need to add a .desktop file to the autostart folder. The location is here:

~/.config/autostart/filename.desktop

And the file should contain something like this:

[Desktop Entry]
Name=x2vnc
GenericName=Connects to VNC on another server for screen sharing
Comment=This is basically a software KVM, but for mouse and keyboard only. 
Exec=/home/jjames/scripts/x2vnc.sh
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true

And the x2vnc.sh file is the following.
Note: we need the sleep 5, since the script has to run only when the desktop is fully loaded.

#!/bin/bash
sleep 5
echo hello >~/.x2vnc.log
x2vnc -passwdfile ~/.vncpasswd -west 192.168.5.5:0 >>~/.x2vnc.log 2>&1 &

I had a few little glitchy issues that had workarounds:
  1. When sliding from Windows back to Linux, the Windows desktop program windows would appear as outlines. This may not matter, unless you are comparing files or doing some other activity between the two monitors. A not-so-good workaround I found was to click the Windows menu button in the bottom-left and just leave it open. 
  2. To slide from Linux to Windows, you simply move the mouse in the direction you specified - e.g. I chose west, which is the left side. But I had a Gnome toolbar loaded, so I had to make sure the pointer was above the bar for it to work. 
  3. Occasionally, sliding over to Windows didn't work. The solution for me was to simply switch my Gnome from, say, desktop 2 to desktop 3 and try again.

Wednesday, April 26, 2017

Cleaning up your code in JDeveloper

Environment / assumptions for this post:
  • JDeveloper 12c
  • jdk7 / 8
I really like to see a little green box in the top of my fragments and source code.
The green All is Clear / No Issues Found indicator
This may be unnecessary and a lot of trouble to achieve, but there are things to be said for it:

  • it forces you to acknowledge and fix potential issues - some examples:
    • ui: deprecated components, such as af:commandButton when going from 11g to 12c
    • ui: 'escape' attribute on af:outputText
    • ui: typos, especially in method names for things like listeners and validators
    • java: unchecked cast and conversions
    • java: convention violations
    • etc 
  • you end up cleaning up as you go - unused variables and methods, unnecessary imports, etc.
  • satisfies one's obsessive-compulsive urge to have clean code with no outstanding little things you have to constantly ignore. :)
There are a few ways to get to the Green Nirvana:
  1. Fixing the actual problem!
  2. Suppressing the warning.
  3. Making JDev not check for the warning

Method 1 - Fix the problem

Nothing to say for this, except don't be lazy! 

Method 2 - Suppress the warning

Suppressing the warning is usually easy, since JDev's code assist will do it for you. 
Suppress a warning with JDev's Code Assist

Method 3 - change JDev's Auditing

Change JDev's auditing. This can be dangerous, but proceed if you are comfortable. In JDev Properties, find Audit, then Manage Profiles. Browse around and deselect what you don't want checked for, then save-as a new profile. Make sure the new profile is selected.
Managing JDev's Audit Profile


Tuesday, April 18, 2017

Coexisting at work with Oracle Enterprise Linux (OEL 7.2) and Microsoft Windows

Environment / assumptions for this post:
  • Oracle Enterprise Linux (OEL) 7.2
  • Windows 7 (yes, our organization is a bit behind)
  • Two physical machines + 2 sets of Monitor/Keyboard/Mouse
This is a very big post. I thought it may benefit me in some small way to take notes on what I did when assembling a Linux desktop - and just maybe save someone else a bit of trouble and avoid my mistakes.

It's a funny thing. For many years of web development and integration projects now, the environments have been almost exclusively Unix-based servers with Windows desktops for the developers. We often don't have a choice, since corporate policy dictates what can be supported by the IT folks. There is considerably more flexibility where I work, thankfully. So why not do development on a similar platform to what I am ultimately developing for?

The Goals

A fully functioning Linux desktop, based on Oracle Enterprise Linux. This was chosen because most of our servers are OEL, so if I'm developing for them I may as well match as closely as possible. Additionally, I need to have the following capabilities:
  • Remote Desktop to the Linux environment when connecting from home;
  • Ability to print stuff, however infrequently I actually do this;
  • Have a fully functioning environment for JDeveloper and SQLDeveloper at the least;
  • Be able to connect to shared drives on the Corporate Windows network
  • Will keep the Windows desktop to the side with a second monitor, which will be the main machine for Outlook and Microsoft Office tools. Yes, there are ways to get this working on Linux, but I don't need even small incompatibility hassles - everyone else at the workplace makes heavy use of Outlook and Word.
  • To expand my knowledge of administering a Linux system. I'm a developer, not an administrator; however, having better Linux skills is helpful for troubleshooting, among other things.
I was literally starting off with a brand new desktop, nothing installed. The ideal situation! I started off by downloading the latest OEL ISO images (version 7.2 as of this writing) from Oracle. Once the basic OS was installed, I started configuring things in no particular order. As I encountered missing software and libraries, I ended up going down various rabbit holes to solve them - which is half the point of learning.

Oracle Linux 7.2 Install

  1. Boot off of USB, made bootable with the "V100082-01" ISO image.
  2. Go through each option one by one, starting with Network.
    1. if connected with Ethernet, should be able to use DHCP to get Gateway, DNS, etc. The hostname will have to be something given to you by your organization's IT guys. You may want to mark down your IP address and perhaps MAC as well, although you can fetch it easily enough later.
    2. specify Date & Time after connecting to the network
    3. specify installation destination
    4. specify Software Selection.
      • Select "Server with GUI" as Base Environment. I chose the following, but you may need more or less:
      • File & Storage Server
      • Hardware Monitoring
      • Java Platform
      • Network File System Client
      • Performance Tools
      • Remote Management for Linux
      • Development Tools
    5. enable kdump - enables crash dumps in the event of big problems
  3. Restart, accept EULA if necessary. Pick middle "unbreakable" kernel option if needed when rebooting.
  4. After a few minutes, you will automatically be prompted for Software Updates (OS, Firefox, etc).

Mounting shared drives

For mounts to Windows shares, we want:
  • uid to be a specific user when creating files, say jjames, uid=1000
  • gid to be a specific group when creating folders, say jjames (or users), gid=100
  • we want file and folder permissions to be 644 and 755 respectively
  • we want to specify credentials in a protected place, say /etc/.samba/file_name
Other things to note:
  • normally we would use umask, but not for cifs mounts
  • specifying all the -o options in the credentials file does not work
  • you need the cifs-utils package installed (try: yum list installed | grep cifs)
  • the manual page is under mount.cifs(8)
This will work during runtime. Make sure to create the destination first. As root:

mkdir /mnt/dropbox
chmod 755 /mnt/dropbox
chown jjames:users /mnt/dropbox
mount -t cifs //172.16.3.34/dropbox /mnt/dropbox --verbose -o uid=1000,gid=100,file_mode=0644,dir_mode=0755,credentials=/root/.samba/rc4545_credentials

So the credentials file looks like this (permissions 600):

username=jjames
password=xxxxxxx


To mount when the system boots, use /etc/fstab.  Here are two drives: a dropbox on my Windows machine, and a Windows shared drive.

//192.168.5.5/dropbox /mnt/dropbox cifs user=jjames,password=xxxxx,uid=1000,gid=100 0 0
//fileserver/dir_name /mnt/n_drive cifs user=jjames,password=xxxxxxx,uid=1000,gid=100 0 0


Notes:
  • use \040 for a space in the folder name
  • similar options as at runtime - see the manual pages mount(8) and mount.cifs(8)
  • the two zeros at the end are the dump backup flag (zero = don't dump) and the fsck pass order (zero = don't check at boot)
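
Rather than putting the password right in the fstab entries above, the credentials file from the runtime example should also work here - a sketch, reusing the earlier (made-up) paths:

//192.168.5.5/dropbox /mnt/dropbox cifs credentials=/root/.samba/rc4545_credentials,uid=1000,gid=100,file_mode=0644,dir_mode=0755 0 0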

To test, use the -a option to mount everything in fstab, with -v for verbose and -f for fake (i.e. a dry run):

mount -avf (first do this to test)
mount -a (to mount everything)

If you have problems (maybe the password changed), you can unmount and mount again:
umount /mnt/dropbox

Or if you suspect issues with cifs/samba:
umount -a -t cifs -l

Or even:
service smb restart

Adding a printer

Assuming it is a network printer

  • first find the IP of the printer e.g. http://xx.xx.xx.xx
  • go to localhost:631 for the CUPS interface in Firefox
  • click "Add Printer" and log in with root
  • step through the questions
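
Once it's added, a quick sanity check from a terminal (the printer name below is hypothetical - use whatever name you gave it in CUPS):

lpstat -p                       # list printers and their status
lp -d my_printer /etc/hosts     # send a small test file to the queue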

Installing Chrome

Get the package:
wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm

Install the package:
sudo yum install ./google-chrome-stable_current_x86_64.rpm

Install CVS

Install CVS:
yum install cvs

Set CVSROOT:
export CVSROOT=:pserver:jjames@rc3210:/home/cvs

Install Meld - Graphical Diff Tool

yum install meld.noarch

Note: there is a package called python-meld3, which is a templating engine - not the same thing.
Once installed, check under Gnome Applications / Programming.

Install xrdp

There are a lot of steps here, but nothing terribly complicated. A summary of what we need:

  1. Find and install the xrdp package.
  2. Install EPEL (Extra Packages for Enterprise Linux - an open source repository project)
  3. Install the nux-dextop repository - a third-party repo with extra desktop packages
  4. Install VNC
  5. Configure things to start on boot
  6. Configure the firewall

1. Find xrdp package

[root@myhost ssh]# yum search xrdp

Loaded plugins: langpacks, ulninfo
====== Matched: xrdp ======
freerdp.x86_64 : Remote Desktop Protocol client

[root@myhost ssh]# yum install freerdp.x86_64

Loaded plugins: langpacks, ulninfo

Resolving Dependencies

2. install epel

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install epel-release-latest-7.noarch.rpm

3. install nux

wget http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
yum install nux-dextop-release-0-5.el7.nux.noarch.rpm

4. install VNC

yum -y install xrdp tigervnc-server

5. Do some configuring

Start the service (we'll add to system startup later)
systemctl start xrdp.service

Check if started with
netstat -tulpn | grep 3389

If failed to start, check status:
systemctl status xrdp.service

If failed, it may be a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1177202

Note: this all has to do with SELinux - "Security-Enhanced Linux", a set of kernel modifications originally
created by the NSA. It assigns a "context" to a user or process, consisting of a user, role, and domain. You can also create policies, which is where "labels" are managed. Ugh. SELinux is supported by many distros now, including RHEL, CentOS, Fedora, Debian, etc. From the documentation, it is used "primarily to confine daemons that have clearly defined data access and activity rights. This limits potential harm when compromised...".

This might fix it. List security context with (do before and after):
ls -Z /usr/sbin/xrdp

Change the context:
chcon -t bin_t /usr/sbin/xrdp

Check again:
ls -Z /usr/sbin/xrdp

Also do this one:
chcon -t bin_t /usr/sbin/xrdp-sesman
ls -Z /usr/sbin/xrdp-sesman

Try starting again:
systemctl start xrdp.service

and confirm it is listening on the port:
netstat -tulpn | grep 3389

To enable service on startup:
systemctl enable xrdp.service

I had an error when enabling it - an Access Denied error. There were no permission issues on folders, but I discovered that a symbolic link was missing. These "units", i.e. services, have a link that points from:

/etc/systemd/system/service_name
   to
/usr/lib/systemd/system/service_name

So do the following if the link is missing:
[root@myhost system]# cd /etc/systemd/system
[root@myhost system]# ln -s /usr/lib/systemd/system/xrdp.service xrdp.service

...then try enabling again.

6. Finally, create an iptables firewall rule.

Note 1: you can also, through the GUI, use the Firewall tool. Hit "super" key, search for Firewall. You need to create a 'permanent' port rule for 3389/tcp.

Note 2: firewalld is a front-end to iptables

From the command line, we need to add the rdp port and then reload:
firewall-cmd --permanent --zone=public --add-port=3389/tcp

(responds with success)

firewall-cmd --reload

(responds with success)

Notes  on firewalld rules:

  • rules can have two configurations: permanent or runtime
  • rules have a zone, such as public, dmz, external, trusted, etc.
  • ultimately you are allowing or disallowing ports and protocols (tcp/udp)
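
To double-check what actually got applied, firewall-cmd can list both configurations:

firewall-cmd --zone=public --list-ports               # runtime rules
firewall-cmd --permanent --zone=public --list-ports   # permanent rules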


Start Remote Desktop on Windows - make sure Display Colours are reduced to 16bit or you may have issues.

Note - Sharing the active desktop

When I log in to work from home, I use RDP from Windows and it works pretty well: it shares the currently logged-in session and all is good. The xrdp method above works fine too, however it starts a new session and doesn't show me the original. It seems there isn't any easy way for xrdp to share the :0 desktop, but you can use VNC to approximate it.  The idea is:

  • connect to an X11 session (not a terminal connection) - it doesn't have to be :0
  • run a "one-time" x11vnc server
  • run the VNC viewer from the remote client machine - it will connect to the :0 X11 session
Steps:

  1. Install x11vnc with yum. I got a dependency error for xvfb, so I had to install that manually as part of the xorg-x11-server-Xvfb package. I got the package from here:
    https://centos.pkgs.org/7/centos-x86_64/xorg-x11-server-Xvfb-1.17.2-22.el7.x86_64.rpm.html
  2. Try running as a "one time thing":
    x11vnc -noxdamage -display :0 -safer -nopw -once -xrandr
  3. Configure your firewall if necessary, add port 5900 TCP. Presumably you're behind some sort of corporate firewall to begin with. Note this could be a security issue otherwise.
  4. On your remote machine, run the VNC viewer, connecting to port 5900. You should get the original :0 X11 session.
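
On the client side, any VNC viewer should do - for example, with the TigerVNC viewer mentioned earlier, something like this (the IP is a placeholder; display :0 maps to port 5900):

vncviewer 192.168.5.5:0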

QDirStat

The git repository project is here:
https://github.com/shundhammer/qdirstat

The following are prerequisites:

  • gcc
    gcc.x86_64 (should be installed)
  • qt5 runtime
    qt5-qtbase.x86_64
  • qt5 header files
    qt5-qtbase-devel.x86_64
  • zlib runtime and header
    zlib.x86_64 - (should be installed)
  • zlib devel
    zlib-devel.x86_64
  • git
    git.x86_64 - (should be installed)

Note: qt5 is a programming framework (Qt version 5)

Hint: if you want to find a package by feature or file, try this:
yum whatprovides "*/qt5"

Hint: you can see where stuff gets installed this way:
rpm -ql qt5-qtbase.x86_64
rpm -ql qt5-qtbase-devel.x86_64

So there is stuff in:
/usr/bin, /usr/include, /usr/share, /usr/lib64

Installation - get source

cd /root
mkdir install
cd install
git clone http://github.com/shundhammer/qdirstat

Run qmake - will create a Makefile for gcc

qmake-qt5

Run make - build the binaries with gcc

make

Run install - copies files to a few places

make install

You can see the executable in /usr/bin, and the Gnome icon and config in /usr/share/applications and /usr/share/icons

Try out the program!

Check on Gnome under Applications / System Tools.

Updating to JDK 1.8

Make sure the JDK is there; it may be only the JRE. Look for installed java packages:

yum list installed | grep java
  or
rpm -qa | grep java

Check where the package is installed (if it is):

rpm -ql java-1.8.0-openjdk-devel.x86_64

If you need to install it, look for it in the repository:

yum search java-1.8.0

So, install it:

yum install java-1.8.0-openjdk-devel.x86_64

It'll install somewhere like:

/usr/lib/jvm/jdk_name

Keep in mind though that in this case there'll be a generic name to use:

/usr/lib/jvm/java

This is due to the Alternatives System. You'll see a link pointing to a link pointing back to a directory. You can check out the man page, or there's a good explanation here:
https://www.lifewire.com/alternatives-linux-command-4091710

As an example, one jdk looks like this:

/usr/lib/jvm/java (symbolic link)
 points to:
/etc/alternatives/java_sdk (symbolic link)
 points back to:
/usr/lib/jvm/java-1.8.0-openjdk-etcetc-x86_64 (real directory)

You can see what there are alternatives for with this:
alternatives --list

Or for a specific program, show detailed information:
alternatives --display java
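
And if the generic name ever points at the wrong JDK, the same tool can switch it interactively (as root):

alternatives --config java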

Install SQLDeveloper

Download package, use yum to install (in this case, with my user account):
sudo yum install ./sqldeveloper_package_name.rpm

When done, just run it:
sqldeveloper

It will ask for the location of Java. SQLDeveloper is in /opt, but Java is something like:
/usr/lib/jvm/java (the "generic" java name/link)
or you can provide the actual directory directly if you're concerned about stability:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.91-1.b14.el7_2.x86_64

Installing JDev

There are a few things to do first.

1. increase file handles for shell

As root, change the values in /etc/security/limits.conf (the first column is the domain - a specific user name, or * for everyone):
*  soft  nofile  65536
*  hard  nofile  65536
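
To confirm the new limit took effect, open a fresh shell (or log out and back in) and check:

ulimit -n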

2. Download the JDev files. 

The 12.2.1 version had 2 files. Run the .bin file, accept most defaults.

3. Run JDev.

At this point, you can install CVS / SVN extension, and any others you need.

4. Create the Default Domain.

Go to Application Servers, create default domain. If you get a crash/XWin error when switching to "Application Servers" window, switch to Databases first.

Installing SmartCVS

There is CVS functionality built in to JDev, but from past experience I don't like to rely on it. I'll give SmartCVS a try. I got a bit fancy here: I wanted to capture the program's stdout/stderr instead of just sending it to /dev/null, so I could troubleshoot issues. If you don't care about that, then don't bother with steps 3 onward.

1. download package, will be a tar ball.

2. As root, extract it (for a system-wide install) to /opt:

cd /opt
gunzip smartcvs-generic-7_1_9.tar.gz
tar -xvf smartcvs-generic-7_1_9.tar

3. Create a script to run the file in the background, and change its permissions.

cd /opt/smartcvs-7_1_9/bin
touch runsmartcvs.sh
chmod 755 runsmartcvs.sh

Now, edit the file, something like the following:

#!/bin/bash
echo Redirecting stdout to ~/.smartcvs/logs
/opt/smartcvs-7_1_9/bin/smartcvs.sh >~/.smartcvs/logs/smartcvs.log 2>~/.smartcvs/logs/smartcvs_err.log &

4. As the user, create a directory for logs and change its permissions

cd ~/.smartcvs
mkdir logs
chmod 755 logs

5. As root again, create a link to the script we created

ln -s /opt/smartcvs-7_1_9/bin/runsmartcvs.sh /usr/bin/smartcvs

6. (optional) add to the Gnome menu - use Alacarte which should be installed already. 

  1. run alacarte from terminal
  2. select where you want the shortcut to go, click New Item
  3. give it a name, type the location (probably /usr/bin/smartcvs)
  4. you can click on the icon and select a .png file. Some programs like this come with a few. To find the install folder, type:

whereis smartcvs

and it'll tell you where it thinks it is. The install folder should have .png icons, so browse to it and pick one. In my case:
/opt/smartcvs-7_1_9/bin

OPTIONAL

To get even fancier, we could set up log rotation - otherwise we'll likely forget about the logs and they may grow over time. Note that the two logs we are rotating both end in .log; that was no mistake, since it lets one wildcard pattern match both. As root, go to /etc/logrotate.d and create a new config file for smartcvs:

cd /etc/logrotate.d
touch smartcvs

Edit the file to have something like this:

/home/*/.smartcvs/logs/smartcvs*.log {
  rotate 2
  missingok
  notifempty
  compress
  size 5M
  daily
  copytruncate
}

Note: the copytruncate is important; it will copy the log, truncate it, and allow the process to continue writing to it.

As root, test in debug + force mode (-df), which means it won't actually do anything:

logrotate -df /etc/logrotate.d/smartcvs

Check the output for errors.

Configuring Mail Relay

I'm going to assume that you won't actually set up your own mail server, but simply configure a relay to your corporate server - essentially so you can send email to yourself or others in your company. If you want to send mail outside, you'll also have to configure authentication, as anonymous relaying will be prevented.

There are two main mail systems you'll likely encounter - sendmail and postfix. There are entire books written on sendmail alone - another Linux iceberg - so I won't even attempt to explain anything about it. With a default OEL install, postfix is the one already set up, and it is the easier of the two to configure.

Before we start
  • you can check /etc/hosts, probably nothing but 127.0.0.1 in there
  • you shouldn't need to add the mail server; your corporate DNS will likely be set up to resolve it to something like mail.yourcompany.com
  • check /etc/resolv.conf, should have your domain and one or more dns servers in there from the original OEL install:
search company.com
nameserver 192.168.1.2
nameserver 192.168.1.3

Postfix

1. configure relay to point to your corporate mail server

postconf -e 'relayhost = mail.company.com'

2. reload configuration

postfix reload

3. Test with Mutt or similar

First, install it (check if it is installed)

yum list installed | grep mutt
yum install mutt

Try sending email to yourself

mutt -s "test email" me@company.com <<EOM
This is my test email!
EOM

If it doesn't work, see what happened by checking the mail logs. It is likely your network admin will have to allow the email relay to your server.

less /var/log/maillog
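
If you're not sure the relay setting stuck, postconf can echo back what postfix is actually using:

postconf relayhost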

Misc GNOME stuff

Gnome Extension

A lot of cool extensions can be installed via the Gnome extensions page, a website:
https://extensions.gnome.org

There are two parts:

  • the browser extension
  • the native host connector

Firefox didn't seem to have an issue for me, but for Chrome I got an error about the missing native host connector. Unfortunately, you have to jump through a few hoops to get it to work. The procedure is similar for Chrome and Firefox. Start with installing the extension/plugin.
Gnome Shell Extension Error in Chrome -
missing native host connector

Browser Extension
For Firefox, from the pancake menu / add-ons, search for Gnome Shell Integration.
For Chrome, go to the Chrome store and search for Gnome Shell Integration.
Gnome Shell Extension for Chrome

Native Host Connector
There may not be a package available, in which case you have to build it manually. Start with this page:
https://wiki.gnome.org/Projects/GnomeShellIntegrationForChrome/Installation#Cmake_installation

I had some of the prerequisites already installed, but I was missing the following:

  • cmake
  • python-requests
  • DBus
After installing these, it's the usual build steps - see the web page above as the details may change (a rough sketch follows this list):
  1. get the source, in this case using git
    note: I had to use the https alternative:
    git clone https://git.gnome.org/browse/chrome-gnome-shell
  2. make (cmake for this)
  3. install
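
For me the build boiled down to roughly this - a sketch only, since I haven't reproduced the exact cmake flags the wiki page lists:

git clone https://git.gnome.org/browse/chrome-gnome-shell
cd chrome-gnome-shell
mkdir build && cd build
cmake ..          # the wiki lists extra -D options; plain defaults shown here as an assumption
make
sudo make install
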
Once that is all done, you just click on the little footprint icon in the top left of a Chrome tab and the error should be no more.

Other Misc stuff

Install alacarte (menu editor) -> check under Sundry / Main Menu

yum install alacarte

Install dconf-editor (gui version of gsettings):

yum install dconf-editor.x86_64


For Firefox, the gnome shell plugin:

yum install gnome-shell-browser-plugin.x86_64

Then, go to https://extensions.gnome.org/extension/208/panel-settings and click "on" to install

Via firefox, install Dash to Dock. Or, you can try the cairo dock:
yum list available | grep cairo
In my case I needed cairo.x86_64.

Via firefox, install User Themes extension, make sure it is "turned on", and restart Tweak

ssh key generation and logins

I wanted to ssh from my Windows machine to the new Linux desktop. This general method will work from any source to destination if they both have openssh installed. In this case, I ran this in Cygwin in Windows:

ssh-keygen -t dsa -f ~/.ssh/id_dsa

Copy the public key to the target machine (creates authorized_keys file):

ssh-copy-id username@192.168.5.5

Note: make sure the ~/.ssh folder on the target is chmod 700
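
In other words, on the target machine (tightening authorized_keys as well is a standard extra precaution):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys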

FileZilla

Install the program:

yum install filezilla

It should add an icon to Gnome. If the ssh agent is running, there should be an SSH_AUTH_SOCK env variable in place; then you can run FileZilla, create a profile, and choose 'interactive' - it'll use your ssh password.

Postman

Download the program from here:

https://www.getpostman.com/

Then unzip to somewhere like /opt. Use root or sudo to install:

tar -xzf postman.tar.gz -C /opt
ln -s /opt/Postman/Postman /usr/bin/postman

Where to install things

A good reference:  http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/

Some installable places:

/home - where the users' homes are located

/usr/local - duplicate of /usr, but for locally compiled packages i.e. it has a make file.

/home/user/opt - for unstable things, like development releases of something e.g. Firefox

/opt - precompiled binaries/self-contained programs for system-wide stuff, usually packaged up in a tar ball, e.g. SmartCVS. Note: this means the software here does NOT split its files into bin, lib, share, etc.

~/.local - per-user counterpart of /usr/local e.g. Gnome desktop configuration

~/.local/opt - per-user version of /opt


Other places:

/usr - system-wide, read-only files. There is stuff here like X11, telnet & ftp, java, sendmail, yum, perl, samba.

/lib - kernel modules and shared libraries needed for the system

/etc - configuration files. No binaries go here, usually static stuff.

/etc/alternatives - symbolic links for alternatives to generic names e.g. java

/sbin - system binaries, usually root only

/mnt - temporary mount point e.g. usb stick. New distros may use /media

/var - variable data, like logging files, spool directories

File Cleanup

This is handled by a daemon: systemd-tmpfiles

Try:

systemctl | grep tmp

One of these "units", the .timer one, cleans up configured locations across the system. This will tell us where the config file is, along with other status information:

systemctl status systemd-tmpfiles-clean.timer

From there, we can look at its particular config file:

cat /usr/lib/systemd/system/systemd-tmpfiles-clean.timer

At the bottom is this, indicating it runs 15 minutes after system boot, and every 1 day after that.

[Timer]
OnBootSec=15min
OnUnitActiveSec=1d

The following hierarchy is used for file/directory cleaning. Note they are in priority order:

/etc/tmpfiles.d/*.conf  (mostly where you'll put your own custom cleanup)
/run/tmpfiles.d/*.conf  (run-time system stuff)
/usr/lib/tmpfiles.d/*.conf (vendor provided stuff, although /tmp is in there)

The config files can separately control what is deleted and what is not on a specific schedule.

From /usr/lib/tmpfiles.d/tmp.conf:

v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d
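
A custom rule in /etc/tmpfiles.d/ follows the same column layout (type, path, mode, user, group, age). A hypothetical example that would keep a scratch directory trimmed of anything older than a week:

# /etc/tmpfiles.d/scratch.conf - made-up path and folder, just for illustration
d /var/tmp/scratch 0755 jjames users 7d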

Random Other Interesting Stuff

You can check failed logins here (as root):

/var/log/secure

Thursday, April 6, 2017

CVS tidbits

Environment / assumptions for this post:
  • CVS
  • shell access to server
Yes I know - CVS or Concurrent Versions System is as old as time. Well, 1990 anyway. But sometimes you just don't have a lot of choice with the tools you have to work with. If it is working perfectly fine, then changing for the sake of changing turns out to be a low priority.

In any case, here are a few tidbits I've needed over the years. I may not need them that much, but once in a while I come back to them since I forget from one usage to the next.

So in no particular order, here are some CVS Tidbits.

CVS Vendor Tags

  1. First, import the 3rd party code with a vendor and release tags. In this case, repo/foo is the repository location, FOO is the vendor tag, and FOO_1_0 is the release tag.
     cvs import -m "Import of Foo 1.0" repo/foo FOO FOO_1_0
     
  2. Second, checkout the project, make modifications as usual, and commit
     
  3. Third, when a new version of the 3rd party software comes out, import it like above, but with a different release tag (same Vendor tag). It will report conflicts:
     cvs -q import -m "Import of Foo 2.0" repo/foo FOO FOO_2_0
     
  4. Fourth, do a merge (under checkout command) between the two release tags:
     cvs checkout -j FOO_1_0 -j FOO_2_0 repo/foo
     
  5. Fifth, resolve the conflicts and commit
     
  6. Sixth, do a fresh checkout.

CVS Restore from Attic

This is an administrator function; you will need command-line access to the repository. To restore a file from the attic, first discover what the last version was before it got deleted.  Navigate to the Attic of the directory where the file was, and look at the top of the deleted file.
Note: Make sure not to select the "dead" revision, or else it will restore a dead file of 0 bytes.  Go back one more.
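
For example, on the repository server (the path below is made up - it's wherever your repo lives, with the file's directory and its Attic subfolder), the top of the RCS file shows the head revision and its state:

 head -20 /home/cvs/repo/foo/Attic/DeletedFileName.java,v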

Next, do the following:

 cd ~/local/path/to/folder/of/deleted/file

 cvs -Q update -p -r 1.11 DeletedFileName.java > DeletedFileName.java

 cvs add DeletedFileName.java

 cvs ci -m "reviving DeletedFileName.java" DeletedFileName.java

CVS Move Tag

This happens when I tag a release, but forgot one tiny little change to a file. I really want the tag to apply to the new version of that one file. So, for example, if I want to move the tag on that file from revision 1.8 to 1.9:

 cvs tag -F -r 1.9 PROD_3_2 MyFile.java

CVS loginfo settings

There are two nice integrations we use CVS with:
  • One with Atlassian's JIRA bug tracking software. This allows us to make a connection between a particular bug and all the files that were changed to fix it. This is done by matching the JIRA bug number with the CVS commit comments (which are added to a commit database).
  • Two, with ViewVC, which lets you browse a repo in a browser, compare versions, etc. See the ViewVC website.
So, the following settings enable these integrations by doing two things:
  1. log the commit to a commit database (e.g. viewvc uses mysql)
  2. force the user to enter a commit message conforming to a template e.g. to include a bug tracking number
Under CVSROOT in the loginfo file:
 ALL /usr/local/viewvc-1.0.7/bin/loginfo-handler %{sVv}

Under CVSROOT (or other location) in a file such as commit_template.txt. This is arbitrary, but gives the user a format to follow:
[PRJ-000]

Under CVSROOT in the verifymsg file:
 Proj1 /home/cvs/CVSROOT/verify_commit
 Proj2 /home/cvs/CVSROOT/verify_commit

Under CVSROOT (or other location) in a script such as verify_commit:
 #!/bin/sh
 #
 #
 #  Verify that the log message contains a valid Jira bugid
 #  e.g. ABC-203   or   XYZ-35
 if head -1 < "$1" | grep '\[[A-Z][A-Z]*-[0-9][0-9]*\]' > /dev/null; then
     exit 0
 else
     echo "No Jira issue number found: expecting [PRJ-000] in the commit message, e.g. [ABC-203]."
     exit 1
 fi

CVS Import

I always forget the right way to import into CVS, especially when using WinCVS. I'm apparently not the only one, as I see MANY files imported at the root repository level that clearly shouldn't be there. So use a good GUI tool, or use the command-line I guess.

Using WinCVS

This will probably benefit exactly no one - you'd be hard pressed to find anybody using CVS anymore, let alone WinCVS. Nevertheless, here is the method:
  • Navigate to one level above the directory, so you see it listed on the left side of the screen in WinCvs (do a Browse Location to the parent dir)
  • Right-click the directory (in the left side), select import
  • Specify file types for any unknown files (text, binary, etc)
  • For Repository Path, if you are importing a directory css,  and you want it under a root level project called FooBar, then enter FooBar/css (no leading slash)
  • Specify a log message such as 'Initial Release' or something else
  • Click 'OK'
Now you can do a fresh checkout operation into your project directory.

Using Command-Line

Let's say you have a directory css, e.g. /tmp/css. Set up the env variable however you need to:

export CVSROOT=/home/cvs
cd /tmp/css
cvs import -m "initial import" FooBar/css vendor start


CVS Bulk Delete

You remove files, not directories, from CVS.  To remove directories, delete the files and then update with the "prune directories" option checked. In any case, to mass delete a directory and its contents:

First, open a shell (cmd window for Windows machines), and cd into the directory:
cd /directory/to/be/removed

Now, schedule for deletion all the files in question.
cvs remove -f *

Commit the changes:
cvs ci -m "my log message"

Update, pruning empty directories:
cvs update -P

This procedure will move everything to the "Attic", i.e. it will still be visible in the repository on the server.  It will still be retrievable by tag, version number, etc.  If it was a mistake and you want to permanently banish it, then follow the above operation, log in to the server, and just rm -rf the directory.

CVS Change Log Message

To change a commit comment:

  • must do this from a checked-out copy, not directly in the repository
  • must cd to the directory, cannot specify absolute path
  • must specify a specific version and file

cd /my/path
cvs admin -m 1.8:"My new commit comment" MyJavaFile.java

CVS adding a password for a user

In $CVSROOT/CVSROOT/passwd (i.e. the CVSROOT admin directory under the repository root), the entries look like:

 userid:pass:userToRunAsIfAuthenticationSucceeds

e.g. jjames:$uiu&Q32lN9:cvs

The password is encrypted using unix crypt. Get the encrypted password by using the script cvspasswd; if you don't have it, it is essentially this:

 #!/usr/bin/perl
 $user = shift @ARGV || die "cvspasswd user\n";
 print "Enter password for $user: ";
 system "stty -echo";
 chomp ($plain = <>);
 system "stty echo";
 print "\n";
 @chars = ('A'..'Z', 'a'..'z', '0'..'9');
 $salt = $chars[rand(@chars)] . $chars[rand(@chars)];
 $passwd = crypt($plain, $salt);
 print "$user:$passwd\n";

Tuesday, March 14, 2017

Using JVisualVM to connect to a remote Weblogic 12c Managed Server

Environment / assumptions for this post:

  • Weblogic 12c Managed Server(s)
  • jdk7 / 8
  • Oracle Enterprise Linux 7.2
  • Cygwin on Windows 7

Where I work we have many servers running various versions of Weblogic (mostly 12c but still some 11g) and the Hotspot VM with jdk7 and jdk8, all on OEL (Oracle Enterprise Linux). It can be really useful to monitor threads or memory, browse the MBeans, etc.  Using JConsole used to be the way to go, but the JVisualVM monitoring tool has been available since the later releases of jdk6 and is more flexible.  It doesn't show MBeans from the get-go, but has a plug-in system that makes it very easy to "turn on" right away, along with some other useful plugins. Once you get the tool running, you simply "install" the plugin with a few clicks - it ends up in the user's home folder under ~/.visualvm.

List of JVisualVM plugins available with just a few clicks

There are several ways to run the tool and connect to a remote JVM, some of which are quite complicated - you may have to worry about setting JVM system properties (which may not be possible!), then there is authentication, SSL and keystore issues, firewall issues, and more.  See this Oracle documentation for some additional information.

Another way is to connect locally to the PID of the JVM. This means either logging in to the server's GUI if possible, or running your own X server and simply displaying the remote JVisualVM on your local machine. This last way I've found to be fairly easy to set up (the only catch is it assumes you have shell access to the remote server), and it gives many advantages:

  • do not have to restart the JVM with the system properties required;
  • fewer issues around SSL and security;
  • can easily pick any JVM on a server by selecting its PID

To set this up, see Option 2 in another of my posts. Once this is done, you can install some plugins and get monitoring!
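
As a minimal sketch of that setup (assuming X forwarding over ssh and that jvisualvm is on the server's JDK path - the host and user are placeholders):

ssh -X jjames@remote-server     # X11 forwarding back to your local display
jps -lv                         # list the JVM PIDs running on the server
jvisualvm &                     # the managed servers appear under "Local" - pick the right PID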

Some of the plugins available:

MBean plugin

JVisualVM MBean plugin


Visual GC plugin

JVisualVM - Visual GC plugin


Threads Inspector plugin

JVisualVM Threads Inspector plugin



Thursday, February 9, 2017

Disappearing Application Modules in Data Control panel after migrating an ADF project to JDeveloper 12c

Environment / assumptions in this post:

  • JDeveloper 12.2.1 on Windows 7
  • Custom ADF Fusion Web Application

I had this issue with one project in our custom ADF Fusion Web application that needed to be migrated from 11.1.1.6 to 12.2.1. While checking through the many JDev compile errors and warnings after migration, for some reason the Application Modules completely disappeared from the data controls tab. Unfortunately the little refresh button didn't magically fix it.

After some investigation, there were two things that were helpful to resolve:

Issue 1 - Contains Data Controls setting

There is a setting in Project Properties: a checkbox called "Contains Data Controls". This is normally checked, but it can be unchecked for performance reasons (according to the documentation).  It is found under, for example, Model / Properties / ADF Model - make sure "Contains Data Controls" is checked. This will ensure new AMs show up in the Data Controls section.


Issue 2 - regenerating AM related files

For existing DCs, some of the generated files may have to be forced to regenerate. Make a trivial change to the Application Module, such as adding and then immediately removing a VO, or removing and re-adding a method from the Client Interface. The AM.xml will be regenerated, adding some attributes here and there (notably to the Client Interface of any VOs).


That's what did it for me! Good luck.