Tuesday, March 29, 2016

Real-World Docker Series: Introduction to Kubernetes and Container Orchestration

What is Kubernetes?

  • Offers container orchestration
  • Clustered Service/Platform
  • Platform for running containers (micro-services)
  • Offers application horizontal scaling
  • Automation
Some products that run on Kubernetes include:
  1. Openshift Enterprise
  2. Atomic Enterprise Platform

Kubernetes Cluster Components:
Kubernetes is composed of several components that make up the overall service.  We'll run through them so that you can understand what each service provides to the clustered infrastructure.
  1. Flannel (flanneld)
    • Overlay network
    • Provides private isolated backend container network
    • Runs on each kubernetes node and anywhere that you run the kube-proxy service
  2. etcd
    • Distributed key-value store
    • Manages flannel's network configuration across container hosts/kubernetes cluster nodes, and other information about the cluster
    • Typically runs on kubernetes master
  3. kube-apiserver
    • Manages all data for kubernetes api objects (pods, services, replication controllers, authentication, etc)
    • Provides REST operations and interactions between all other kubernetes components
    • Runs on kubernetes master
  4. kube-controller-manager
    • Handles replication processes and interacts with etcd to ensure desired state
    • Involved in scaling of applications
    • Runs on kubernetes master
  5. kube-scheduler
    • Distributes workloads to cluster nodes
    • Runs on kubernetes master
  6. kube-proxy
    • Makes applications/services available outside of cluster
    • Provides service network where container workloads are accessible from
    • Forwards external requests to appropriate containers in the backend
    • Acts similarly to a  software/virtual switch
    • Typically runs on kubernetes master or a dedicated “jump” proxy server
      • Routes from a switch to a node running kube-proxy are required to access the service network.  The kube-proxy acts as a gateway in this instance.
    • Runs on all kubernetes nodes to ensure communication between service network and backend network
  7. docker
    • Container engine
    • Runs on each kubernetes node
  8. kubelet
    • Communicates with the master server
    • Receives workload manifests from the master and maintains state of the node's workloads
    • Runs on each kubernetes node
Kubernetes Application Components:
When deploying an application on top of kubernetes, there are several bits and pieces to put together like building blocks.  Your containers should be made up of a single service/micro-service that provides a piece of functionality to the overall application.  Historically, an application would be deployed onto a server where many services were running simultaneously.  In kubernetes, you should build your applications in a modular fashion where each process/service is a container.  

For example, a traditional web server might run httpd, a small database service (mysql/mariadb/percona/postgresql), and maybe some other small scripts to run backups and do daily tasks (cron.)  In the container world, we would run three containers to accomplish the above example.  We'd have an httpd container, a database container, and a cron container.  The httpd and cron containers would probably be attached to the same storage so the cron container can manipulate the httpd container's data as required, and the cron container may even have a database client to be able to connect to and manipulate the database container's data.

Another thing to consider is how your application can be scaled.  Typically the easiest way to do this is to scale out the application's front-end to be able to handle large loads of traffic.  You can also build a distributed database by taking advantage of replication and scale the database containers horizontally.

The application's containers are one aspect of kubernetes application design, but there's also the actual kubernetes components that put the whole application together and allow it to function.  

Here are the most common components used and a brief explanation of what they do (a minimal example manifest follows the list):

  1. Pod
    • Typically contains a single running container, but can run more than one container to build applications
    • Gets assigned a private (non-routable) IP address for communication within the kubernetes cluster
    • Wraps around a container
    • Cannot be scaled
  2. Replication Controller
    • Defines a pod spec
    • Runs X number of exactly the same pods (i.e. replicas)
    • Ensures X number of pods are always running in the cluster based on what you define
    • Used in horizontally scaling your applications
    • A pod defined by a replication controller will respawn if killed
    • It's a good idea to always use replication controllers, even if you only want a single pod running, because if that pod dies the replication controller will respawn it
  3. Service
    • Exposes a pod or multiple pods from a replication controller on the kubernetes service network, effectively exposing the service to the outside world on a single unique IP address and a specific port.
    • This is how consumers of your application access the application
    • Uses a label from within the replication controller's spec (selector) to dynamically keep track of all the pods (endpoints) available to serve
    • Effectively acts as a load balancer
  4. Volumes/Persistent Storage
    • Like we've talked about in the past with our posts on docker, containers need a way to persist data.  Persistent volumes can be of many types, such as NFS, a git repository, kubernetes node local storage, gluster, AWS, etc
    • There are three parts to how the volumes get used:
      1. Persistent volume is defined
      2. A Persistent volume claim is defined which binds to the persistent volume
      3. A replication controller/pod defines its mountpoints and volumes by calling the persistent volume claim and tying everything together
    • Many pods can attach to the same backing persistent volumes
    • If running selinux, you can append the :z and :Z options on your replication controller's/pod's mountpoint specs to work around access issues until some selinux issues are worked out
  5. Namespaces
    • An area to run kubernetes components
    • Good way to decouple departments/projects running work-loads simultaneously on a kubernetes cluster
    • Can be used to limit/allocate resources such as CPU, network bandwidth, disk IO, etc
    • Helps limit impact on the overall available kubernetes node resources by restricting utilization with careful planning by a cluster administrator
    • Can also limit what a cluster user can see and access by making use of role-based access controls
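
To make this a bit more concrete, here is a minimal sketch of a replication controller and a service working together.  This isn't taken from my lab; the names (webapp-rc, webapp-svc), the app=webapp label, and the nginx image are just placeholders for illustration:

apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp-rc
spec:
  replicas: 3                # keep exactly 3 identical pods running
  selector:
    app: webapp              # pods carrying this label count as replicas
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-svc
spec:
  selector:
    app: webapp              # the service load balances across all pods with this label
  ports:
  - port: 80                 # port exposed on the service network
    targetPort: 80           # port the containers listen on

You would create both with something like 'kubectl create -f webapp.yaml', and scale the front-end later with 'kubectl scale rc webapp-rc --replicas=5'.
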
I'll leave you with this pretty topology map from my kubernetes lab.  You can see many of the components we talked about in this post:



Wednesday, November 4, 2015

Run a RHEL7 docker registry in a container

I've been doing some testing and research surrounding the RHEL7 docker-registry service, a private Docker registry server, and decided that it would be a good experiment to run it as a container.

To accomplish this, I had to determine how the docker-registry package and service run, and how they are configured.

So on my RHEL7 host I started by installing the docker-registry package from the extras repo:
yum install -y docker-registry

I took a look inside the systemd service unit for docker-registry, which was located at /usr/lib/systemd/system/docker-registry.service, since all the requirements for starting the process would be held here.

Contents of /usr/lib/systemd/system/docker-registry.service:
[Unit]
Description=Registry server for Docker

[Service]
Type=simple
Environment=DOCKER_REGISTRY_CONFIG=/etc/docker-registry.yml
EnvironmentFile=-/etc/sysconfig/docker-registry
WorkingDirectory=/usr/lib/python2.7/site-packages/docker-registry
ExecStart=/usr/bin/gunicorn --access-logfile - --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b ${REGISTRY_ADDRESS}:${REGISTRY_PORT} -w $GUNICORN_WORKERS docker_registry.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target

As you can see, there's a yaml config file and an environment file (overriding some settings from the yaml.)
There's a working directory, and a startup command.

So once I realized the /etc/sysconfig/docker-registry environment file was overriding settings in the /etc/docker-registry.yml, it was easy to move this into a Dockerfile to build out a container.

Contents of /etc/sysconfig/docker-registry:
# The Docker registry configuration file
# DOCKER_REGISTRY_CONFIG=/etc/docker-registry.yml

# The configuration to use from DOCKER_REGISTRY_CONFIG file
SETTINGS_FLAVOR=local

# Address to bind the registry to
REGISTRY_ADDRESS=0.0.0.0

# Port to bind the registry to
REGISTRY_PORT=5000

# Number of workers to handle the connections
GUNICORN_WORKERS=8

As you'll see, I move the environment file settings into docker ENV lines.  I am also overriding where the docker-registry stores its data, and making the docker registry searchable.  When I run the container I'll be bind mounting a path from my docker host to the docker-registry container.  When the container is shut off, my data will still be there, and it will also come back when I restart my docker-registry container.

I'm not running yum update -y, because there have been some failing dependencies on systemd-libs packages in the latest published containers.

Contents of mydocker-registry Dockerfile:
FROM rhel7:latest
RUN yum --enablerepo=rhel-7-server-extras-rpms install docker-registry -y && \
    yum clean all;
EXPOSE 5000
ENV DOCKER_REGISTRY_CONFIG /etc/docker-registry.yml
ENV SETTINGS_FLAVOR local
ENV REGISTRY_ADDRESS 0.0.0.0
ENV REGISTRY_PORT 5000
ENV GUNICORN_WORKERS 8
ENV SEARCH_BACKEND sqlalchemy
ENV STORAGE_PATH /mnt/registry
WORKDIR /usr/lib/python2.7/site-packages/docker-registry
CMD /usr/bin/gunicorn --access-logfile - --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b ${REGISTRY_ADDRESS}:${REGISTRY_PORT} -w $GUNICORN_WORKERS docker_registry.wsgi:application

Build it:
docker build -t mydocker-registry .

Run it:
docker run --name docker-registry --rm -v /mypath/docker-registry-storage:/mnt/registry:Z -p 5000:5000 mydocker-registry

Now you can tag and push to it.

Tag one of your other images:
Run: 'docker images'
Note one of the image IDs.

docker tag <image id> localhost:5000/mynewtag

Push it:
docker push localhost:5000/mynewtag

Search for it:
docker search localhost:5000/

This should return the image you just pushed into the registry.


This is an insecure docker-registry setup, and authentication is not configured.  To connect to this docker registry from another docker host to pull or push you must make a change on that system.

To push/pull/search from another docker host:
Edit /etc/sysconfig/docker
Uncomment the line:
INSECURE_REGISTRY='--insecure-registry'

Modify the line to contain:
INSECURE_REGISTRY='--insecure-registry <the ip/hostname where your docker-registry container runs>:5000'

Save it and run: 'systemctl restart docker'

Now you can do a 'docker pull <ip/hostname where your docker-registry container runs>:5000/mynewtag'

You can also take the container one step further and configure it as a systemd service unit.
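
For example, a unit file following the same pattern as the dedicated container services described elsewhere in this blog would do the trick.  This is just a sketch reusing the image name and bind mount path from above; adjust the paths and names for your environment:

[Unit]
Description=Docker Registry Container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop docker-registry
ExecStartPre=-/usr/bin/docker rm docker-registry
ExecStart=/usr/bin/docker run --name docker-registry --rm -v /mypath/docker-registry-storage:/mnt/registry:Z -p 5000:5000 mydocker-registry

[Install]
WantedBy=multi-user.target

Save it as something like /etc/systemd/system/docker.docker-registry.service, then run 'systemctl daemon-reload' and 'systemctl enable docker.docker-registry.service' so the registry comes up with the host.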

Monday, November 2, 2015

Real-world Docker Series: Conclusion

I hope you've enjoyed all the shared knowledge on getting real-world docker hosts configured and understanding some of the considerations with systems administration and docker.  This series was intended to be a quick reference for some of the deeper concerns with bringing docker containers to a production environment, and a collection of knowledge that was at times hard to put together.

What I've laid out in this docker series would work in production for a small set of docker hosts and containers residing on each host.  Think of each docker host as VMWare ESX without vCenter.

If you need something more robust with orchestration, clustering, load-balancing, and scaling features, then you will need to start looking at kubernetes.  Kubernetes is a real-world answer to all of these problems.  Red Hat takes it even further with products like OpenShift v3 and upcoming Atomic Enterprise Platform.

Real-world Docker Series: Using selinux with docker

Docker can safely and easily run in conjunction with selinux.  To ensure you're setup for selinux support, check the following:
'getenforce' - Are you enforcing/permissive/disabled?  You should be enforcing.  If not, run 'setenforce 1'.
'yum list installed docker-selinux*' - If nothing returns, then you're missing the selinux components and need to install them.  Run 'yum install -y docker-selinux' to resolve.
'cat /etc/sysconfig/docker | grep OPTIONS' - You should see 'OPTIONS='--selinux-enabled''.  If not make the change and restart the docker daemon: 'systemctl restart docker'.

Docker works auto-magically with selinux to enhance your system's security.  The only thing you need to do to work properly with the tool is understand the switches involved with bind mounting storage.  Visit the Bind Mounting Storage & Ports and Working with NFS mounts articles if you haven't already to understand the caveats with selinux and docker storage.

Next: Conclusion

Real-world Docker Series: Run Dedicated Containers as systemd services

Since running docker run commands to start up many containers when a docker host comes online would become impractical and extremely tedious, we can move these commands into systemd service unit files.

To create your own systemd service unit files, you must store the files in /etc/systemd/system.
It's a good practice to name them by prefixing the service name with docker to easily identify what they are later.
Ex: /etc/systemd/system/docker.<containername>.service

Here is an example of a systemd service for a container:
[Unit]
Description=Nginx Container 1
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop Cont1
ExecStartPre=-/usr/bin/docker rm Cont1
ExecStart=/usr/bin/docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z -p 80:80 nginx

[Install]
WantedBy=multi-user.target

If you're unfamiliar with systemd services, there are a few things to notice here.
After: This makes sure the service is started after the docker service.
Requires: This will not run if there is no docker service.
TimeoutStartSec: 0 means we will not time out when trying to start this service
Restart: Start the service's process (ExecStart) if it ends
ExecStartPre: You can have many “pre” commands. Notice the '-' at the beginning of the command. This tells the service that if this command fails, move on and do not fail at starting the service. This is important mainly for the restart aspect of this service. The first “pre” command ensures that the container named Cont1 is stopped. The second removes any orphaned container named Cont1.
ExecStart: This is where your docker run command goes, and is the main process that systemd will be watching.
[Install] and WantedBy are replacements for the old chkconfig runlevels. The multi-user.target is the equivalent of run level 3. When we run systemctl enable docker.cont1.service, it will be installed (enabled) at run level 3.
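
Once the unit file is in place (e.g. /etc/systemd/system/docker.cont1.service for the example above), reload systemd and enable/start the service:

systemctl daemon-reload
systemctl enable docker.cont1.service
systemctl start docker.cont1.service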

Caveats:
docker stop <container name> will stop the container temporarily, but systemd will automatically restart the container based on the service we created. Test by running docker stop <container name> then run docker ps. Notice the time the container has been up.

To truly stop the running container, do so by stopping the associated service you created for it.

systemctl stop docker.<containername>.service


This is where you may want to introduce some of the power of cgroups (control groups.)  Cgroups allow the administrator to control how much of the host's resources are taken up by child processes.  You would want to add any restrictions for the container in its service file.  This is outside the scope of this series, but you should certainly check out Red Hat's documentation on the topic of cgroups as it pertains to RHEL7, as it has changed significantly with the implementation of systemd in RHEL7.
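
Just to give a flavor of what that could look like (a minimal sketch, not a tuning recommendation; the values below are arbitrary), you can add resource directives to the [Service] section of the container's unit file, or pass the equivalent docker run flags such as --memory and --cpu-shares:

[Service]
# cap the container's process tree at 512MB of RAM
MemoryLimit=512M
# relative CPU weight (the systemd default is 1024)
CPUShares=512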

Next: How to use selinux with docker

Real-world Docker Series: NFS Tree/Hierarchy considerations

When building out the directory structure/tree design, it is important to do this in a sensible way.
Something like:

NFS1
|
|_HOST1
| |_Container1
| | |_webapp
| | |_logs
| |_Container2
| | |_webapp
| | |_logs
|_HOST2
| |_Container3
| | |_webapp
| | |_logs
| |_Container4
| | |_webapp
| | |_logs
|_HOST3
| |_Container5
| | |_webapp
| | |_logs
| |_Container6
| | |_webapp
| | |_logs
|_HOST4
| |_Container7
| | |_webapp
| | |_logs
| |_Container8
| | |_webapp
| | |_logs
|_SharedContent



In the tree design of this example, we're assuming each container has a different code base for its running web application.  If they were all running the same application, you could, for example, omit the webapp directory for each container and mount the same application content, placed in the SharedContent directory off the root of the NFS mount, to the necessary web root of each container.
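
Tying this back to the bind mounting and NFS posts in this series, a container on HOST1 might be started something like this (hypothetical paths, assuming the NFS export is mounted on the host at /mnt/NFS1 and the selinux handling from the NFS posts is in place, so the :z/:Z options are dropped):

docker run --name Container1 --rm -v /mnt/NFS1/HOST1/Container1/webapp:/usr/share/nginx/html -v /mnt/NFS1/HOST1/Container1/logs:/var/log/nginx -p 80:80 nginx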

Think this through when building out your environment.  Systems that perform orchestration are much better suited for what we're showing here, and you should definitely read about kubernetes.

Real-world Docker Series: Working with NFS Mounts

After seeing how to bind mount storage, you're probably wondering, “How can I store data from a container on a NFS mounted device?”

There are 2 ways to accomplish this properly with selinux:
1.) There is a selinux boolean: virt_sandbox_use_nfs
To check the status of this boolean, you can run:
getsebool virt_sandbox_use_nfs
If the status of the boolean is off, then you can turn it on by running:
setsebool -PV virt_sandbox_use_nfs on #Persistent and Verbose on Errors
Now run getsebool virt_sandbox_use_nfs again to verify it's now on.

When bind mounting storage on the NFS mount, you will now need to drop the :z and :Z options.

This now allows the containers to be able to access any of the docker host's mounted NFS volumes when directed to.
2.) Setting the appropriate selinux file context as a mount option. This is accomplished by adding the selinux context required for docker container data to the /etc/fstab NFS mount options.
vi /etc/fstab and find the appropriate NFS mount. Append to the entry's options: context="system_u:object_r:svirt_sandbox_file_t" and save the fstab (see the example entry below).
Unless you are running an NFS v4.2 server and an NFS v4.2 client, you will need to drop the :z and :Z options from your docker run command. NFS v4.2 supports selinux contexts and can properly store the file contexts.
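
For example, an entry might end up looking like this (hypothetical server and mount point; keep whatever options you already use and just append the context):

nfsserver.example.com:/export/docker  /mnt/nfs-docker  nfs  defaults,context="system_u:object_r:svirt_sandbox_file_t"  0 0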

Method 2 is considered more secure, since you are allowing possible access to only a specified NFS volume rather than all of them as seen in method 1.



Real-world Docker Series: Bind Mounting Persistent Storage & Ports

Bind Mounting Storage:
Docker containers do not retain data once they've stopped running. If you want to keep any data generated by a container, you must bind storage from the docker host to the docker container.

To accomplish this, all you need is a valid path on the docker host. When running the container, you will specify where to bind this path.  If you are already familiar with virtualization technology, think of this as assigning a new virtual disk to a virtual machine, and then mounting it on a particular path.  With bind mounting, we're simply mounting a path on the host to a path on the container.

When using selinux, you must specify a :z option for a path shared by many containers (ex: web application's www root.) If the data is specific to a single container, (think log files in /var) you will use the :Z option. You can have multiple volume bind mounts.

Run:
docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z nginx
Command Breakdown:
--name Cont1 is the unique run-time name we've assigned to (in this example) the container we're running from the nginx image (tag)
--rm Remove the container once its work is done (i.e. the process it performs ends)
-v Volume: /root/cont1 (path on the docker host) gets mounted at /usr/share/nginx/html (path in the container)

From this example you see that we bind mounted /root/cont1 into the Cont1 container running the nginx image.

The --volumes-from=<container id> switch (obtain the container id from 'docker ps') will bind mount all the mounted volumes from a running container into another container.  You can specify :ro (read-only) or :rw (read-write) to override the source container's mount settings.  The default is to inherit the settings of the volumes from the container you're bind mounting from.
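
For instance, to take a read-only look at Cont1's html volume from a throwaway container (the image and names here are just illustrative):

docker run --rm --volumes-from=Cont1:ro rhel7 ls /usr/share/nginx/html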

Bind Mounting Ports:
Docker container ports are not automatically exposed.  Docker containers run on a private docker network bridge, so depending on the configuration of the docker container's process, you will need to map a docker host port to the container's port where the process is running.  Docker will automatically create iptables rules for you based upon your port binding command.

Port binding is completed with the -p switch.  See your container's documentation to configure a particular service port.  In this example we're using the nginx container from hub.docker.com, and the default port on the container is 80.

We'll add on to what we did in the storage bind mounting section.
Run: docker run --name Cont1 --rm -v /root/cont1:/usr/share/nginx/html:Z -p 80:80 nginx
Command Breakdown:
Notice the added on switch -p:
-p Port bind mounting. The first 80 is the docker host port (where the service will be accessible); the second 80 is the container's available port.  We're mapping port 80 from the container to the host to open the service up from outside the docker host.

Now to check that you can access what the container is serving, visit your docker host's IP on port 80.
ex: 192.168.122.5 in a web browser will work just fine.  If following this example, make sure you have some content (index.html) in /root/cont1 on your docker host.
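
For a quick end-to-end test (the IP is from the example above; substitute your own docker host's address):

echo 'Hello from Cont1' > /root/cont1/index.html
curl http://192.168.122.5/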

Thursday, October 29, 2015

Real-world Docker Series: Installing Docker

Since this is intended to be a real-world Docker series, we're focusing on Red Hat Enterprise Linux 7.  CentOS7 will be very similar, but make sure you understand the differences between the OSes.

The first thing to do is to make sure you have the extras repository enabled.
yum repolist enabled|grep server-extras
!rhel-7-server-extras-rpms/x86_64                    Red Hat Enterprise L   112

If you don't see it listed, run:
subscription-manager repos --enable=rhel-7-server-extras-rpms

Run the first command again to ensure the repository is now enabled.


Installing Docker:
sudo yum install docker -y

This will install docker and its dependencies, which should include the following:
docker.x86_64                    1.7.1-115.el7           @rhel-7-server-extras-rpms
docker-selinux.x86_64            1.7.1-115.el7           @rhel-7-server-extras-rpms
docker-logrotate.x86_64          1.7.1-115.el7           rhel-7-server-extras-rpms
docker-python.x86_64             1.4.0-115.el7           rhel-7-server-extras-rpms
docker-registry.noarch           0.6.8-8.el7             rhel-7-server-extras-rpms
docker-registry.x86_64           0.9.1-7.el7             rhel-7-server-extras-rpms

Next: Configuring Docker Storage

Real-world Docker Series: Tagging Images

Once a container image has been loaded (or pulled) you can tag it to make it more useful to your project. A tag is very similar to a repository tag when working with a version control system. You'll want a useful tag name to be able to manage your images easily later.

Run 'docker images'
Notice the image id listed. We'll need that to tag the image.

Run 'docker tag <image id> <your tag name>'
Verify your tag by running 'docker images' again.  It will now appear with the tag you provided.
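
For example (the image id here is made up; use one from your own 'docker images' output):

docker tag a1b2c3d4e5f6 mynginx
docker images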

A single docker container image can be used by many running containers. This is done by making use of the --name <desired running name> flag during a 'docker run'.  We'll go over this in detail later.

Next: Bind Mounting Persistent Storage & Ports

Real-world Docker Series: Loading Pre-Built Container Images

When provided with a pre-built container (probably from a developer) outside of a docker registry, you can use the docker load command to import the container.

Run:

 docker load -i <container package>.tar

Verify the container image was loaded:
docker images

You will see an image listed with an id only.

You can also check out hub.docker.com to 'docker pull' pre-built images.
After pulling a docker image, you will also see them listed with 'docker images'.
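
For example, pulling the stock nginx image used later in this series:

docker pull nginx
docker images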

Next: Tagging Container Images

Real-world Docker Series: Intro

If you found your way to my little blog, then you have probably already heard of Docker, the container engine.  I've been doing a great deal of work surrounding Docker in my profession, and this is all so new, it's hard to get your finger on the pulse of where to get started, and where to go next.  This is an attempt to take you from start to finish on setting up a real-world Docker environment.

With this series of posts, I plan to focus on the administration side of things, mainly focusing on configuration and best-practices surrounding Docker.  We'll cover more than the basics found at the Docker Getting Started Page, and provide real-world examples of using containers.  For now we're focusing on just Docker, and as time passes I'll put together some posts on using Kubernetes to control and scale clustered container environments.

Next: Installing Docker

Real-world Docker Series: Configuring Docker Storage

Once the docker package and dependencies have been installed, you will want to configure storage for your containers.

Docker storage is intended to store your container images.  When you run 'docker pull <image name>' docker will store the data in this space.  Container images are typically very small, usually about 300-600MB.

Docker, by default, utilizes loopback storage devices. These create a virtual device at /var/lib/docker/devicemapper and use local storage. Due to performance degradation with loopback devices, the recommended method of container storage is to utilize the docker thin pool. Take a look at the contents of /etc/sysconfig/docker-storage.  This will change after configuring the docker-pool.

The docker thin pool can make use of any block device and create a thin logical volume, and can be configured to automatically grow when new space is added to its volume group (default.)  If you have worked with VMWare ESX you'll get the idea of how a thin volume works.

To configure the docker thin pool, we will use the docker-storage-setup file. First ensure you have a new block device (disk) added to the docker host. When added, get the device name by doing a 'fdisk -l' and identify the appropriate device.

DO NOT START THE DOCKER DAEMON YET

Replacing /dev/sdX with your block device found from 'fdisk -l':
sudo vi /etc/sysconfig/docker-storage-setup
Add the following:
DEVS="/dev/sdX" #can be a comma separated list (replace /dev/sdX with the device identified with fdisk -l)
VG=vg_docker #volume group name that docker will generate for the docker-pool
Save the file.

The docker-storage-setup script will automatically generate the appropriate thin logical volumes and volume group when the docker daemon starts, or by running docker-storage-setup manually. ****NOTE: There's currently a bug with docker-storage-setup, and the block device must be 8GB or greater.

Start the docker daemon:
sudo systemctl start docker

If you'd like, you can now enable the docker service to start at boot:
sudo systemctl enable docker

Due to the docker-pool being a LVM thin volume, you will not see the volume when running 'df -h'.  To verify the volume has been configured:

Run 'lvs'
Verify there is a new Logical Volume with the name docker-pool.
Run 'vgdisplay'

Verify there is a new Volume Group with the name you specified in docker-storage-setup (vg_docker from above.)
Run 'docker info'
This will display detailed info on how the block device was broken up between metadata and data volumes, and show you the available data storage.  To learn more about thin Logical Volumes, run 'man lvmthin'.

Once you have confirmed that everything looks good, you should remove the /etc/sysconfig/docker-storage-setup file. If you don't, I've seen the docker service fail to start in some instances; upon examining your logs, you will find that it is due to existing partitions on the block device specified in the /etc/sysconfig/docker-storage-setup file.


Take a look at /etc/sysconfig/docker-storage. You will see that this was automatically configured to utilize the new docker pool.
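
On my host it ended up looking roughly like the line below.  Treat this as illustrative only; the exact options and device-mapper path depend on your docker version and the volume group name you chose in docker-storage-setup:

DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/vg_docker-docker--pool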

Next: Loading Pre-built Docker Container Images

Thursday, March 22, 2012

Installing Sametime 8.5.2 from scratch & migrating

Please note this article is incomplete.  I should finish up the remainder of my work soon!  It's about 90% done though, and I still have many questions to answer.


Scope
This article aims to install a Sametime V8.5.2 Server with Audio Visual meeting components enabled on a CentOS 5/RHEL 5 Linux server.

Getting Started
If you would like a Sametime Server to have Media (Audio & Visual) capabilities you must install several components.

  • Sametime Community Server
  • DB2
  • Sametime System Console
  • Sametime Media Server
If you also want the new style meeting capability, which I haven't even seen yet and will eventually get to, you must also install Meeting Server. So there you have it, that's all you need to know...hah yah right.  This is an extremely complicated installation and very daunting.  You need to spend hours toiling over the IBM documentation and contacting IBM support to get this working, or you can just read this post and save yourself some headaches.

Tips, things to know, and mistakes I made along the way:
  • Check some of your other sametime servers' sametime.ini for the current VP_SECURITY_LEVEL setting and make sure the new one matches the others in your environment.  That will ensure you can communicate among community servers.
  • Make sure your server's security settings Internet Security Setting is set to: More name variations with lower security.  This enables the use of Domino short-names and full User Names (First and Last Name) to be used when logging into the server.
  • Replicate the vpuserinfo.nsf from your current sametime server to the new one, with a new connection document, until you plan on decommissioning the old server.  This will keep your users' Sametime Contact lists in sync.
  • I had problems launching the launchpad application, so I opened a support PMR and was informed I could use Installation Manager directly by running the install file inside of the IM directory:
    • su - (To log in as root.  It will not allow you to install if you are not root.)
    • #cd <extracted installation files>/<InstallationName>/IM/
    • #./install
  • *****You cannot do Audio/Video between pre-8.5 versions of Sametime

Gather Installers:
So start off by downloading all the necessary parts and build up a clean RHEL server (or whatever you prefer from IBM's supported systems list.)  Also I'll state this now.  I'm not using RHEL, but actually CentOS and will probably be unsupported by IBM if I'm honest with them, but with that being said I was able to trick the installer.  I'll write a separate post on how to make a CentOS system appear to be a RHEL system later as well I suppose.

Domino 8.5.2 and FP3 for it.  (I'm not going 8.5.3 until there's an FP1)
Install your Domino server as normal and make sure you click "This is a sametime server - Yes" in its server document.

The "IBM Sametime Standard V8.5.2 Multiplatform Multilingual" eAssembly is part CRE9WML.
For Linux you will need:
  • CZYD8ML.tar  IBM Sametime Standard Community Server V8.5.2 AIX Linux Solaris Multilingual
  • IBM DB2 9.7 (eAssembly is part CRE9VML)
    • DB2_97_limited_CD_Linux_x86-64.tar.gz - Part CZ1HSEN   IBM DB2 9.7 - Limited Use for Linux® on AMD64 and Intel® EM64T systems (x64)​
  • CZYF4ML.tar   IBM Sametime Standard V8.5.2 System Console Server Linux on x86 Multilingual​
  • CZYF1ML.tar   IBM Sametime Standard V8.5.2 Media Manager Linux on x86 Multilingual​
  • CZYE4ML.tar   IBM Sametime Standard V8.5.2 Meeting Server Linux on x86 Multilingual​

Install DB2 (Start to finish ~1 hour)
DB2 is required by the System Console, and System Console is required by Media Manager, so you will definitely need this if you want to be able to do AV calls in your Sametime Environment.

Planning is important.  If you're running a large scale environment you would want to install each one of these server pieces on a separate machine.  In this scenario we are building a stand-alone sametime server.

Copy the DB2_97_limited_CD_Linux_x86-64.tar.gz up to your server.  
Unpack the files: 
#tar -xvf DB2_97_limited_CD_Linux_x86-64.tar.gz
#cd wser


I found it easier to create your DB2 group and user first rather than letting the installer do it.  It was giving strange errors on my standard password:
groupadd dasadm1
useradd dasusr1 -G dasadm1
passwd dasusr1
<enterapassword>

groupadd db2iadm1
useradd db2inst1 -G db2iadm1
passwd db2inst1
<enterapassword>

groupadd db2fadm1
useradd db2fenc1 -G db2fadm1
passwd db2fenc1
<enterapassword>


Run the graphical setup:
#./db2setup
Choose to install a Product.
Choose DB2 Workgroup Server Edition
Accept the license agreement.
Point the response file to a path such as /opt/ibm so you can replicate your install if needed.
Point it to your desired install path.  Mine is /opt/ibm/db2/V9.7_01
Point the installer to the users we created in the order we created them as it prompts on each screen.
Give the installer your mail relay server hostname.
Create a local contact.  I used  a distribution group in my case.  You just need to give it a name and an e-mail address.
Click Finish.

Log in as db2inst1 and run the application /opt/ibm/db2/V9.7_01/bin/db2fs
This will launch DB2 First Step.
Click the Create a New Database
Name it: STSC
Default Directory> I gave it /opt/ibm/db2/data/db2inst1
(***if the path doesn't exist yet you need to create it first.  Change ownership to your db2inst1 user for the install path as well. chown db2inst1.db2iadm1 /opt/ibm/db2/data/db2inst1)
Alias> I left blank
Comment> Sametime System Console
Click Next
Maintenance> "Yes I can specify an offline maintenance window of at least an hour when the database is inaccessible."
Click Next
This is all your choice for when maintenance will occur.  I'm letting mine run on weekends at 3AM and have a window of up to 5 hours to complete.
Timing> Start Time 03:00
Duration 5 hours
Only on selected days: Saturday and Sunday selected
Notification>You will probably already see the contact you created earlier selected.  This is fine for me.
Click Next
Summary>Make sure everything is to your liking.  You can click on the "Show Command" button to see what is going to run on your system.
Click Finish


As long as all of your directory permissions were set properly by changing owner, you should see some progress on the creation of your new DB.

You will see a window with post installation tasks and a log file.  You will want to record the db2inst1 port number for your System Console install.  My port was 50001.

Install IBM Sametime Standard V8.5.2 System Console (~1hr 20 mins.)
Copy the installer to the server and extract the files.
Switch to the db owner user:
su db2inst1
We are now going to create the System Console Database and Tables
cd ~/sqllib/
. ./db2profile
cd <extracted system console files path>/DatabaseScripts/SystemConsole/
./createSCDb.sh
It should return something like this:

Processing...




   Database Connection Information


 Database server        = DB2/LINUXX8664 9.7.0
 SQL authorization ID   = DB2INST1
 Local database alias   = STSC


*** createSCDb.sh:  skipping granting privileges to self


DB20000I  The SQL command completed successfully.
DB20000I  The SQL command completed successfully.
CREATE SCHEMA SSC
DB20000I  The SQL command completed successfully.


CREATE TABLE  SSC.DEPLOYMENTPLAN ( PLANID VARCHAR(300) NOT NULL, PLANSTATUS VARCHAR(500), PLANDETAIL CLOB, PRIMARY KEY (PLANID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PHYSICALNODE ( NODEID VARCHAR(300) NOT NULL, HOSTNAME VARCHAR(300), DEPLOYMENTID  VARCHAR(300), PREREQID  VARCHAR(300), DEPTYPE VARCHAR(500), PLATFORMDETAIL CLOB, GEOID VARCHAR(500), PRIMARY KEY (NODEID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PREREQMASTER ( PREREQTYPE VARCHAR(300) , PRODUCTNAME VARCHAR(300) NOT NULL, OFFERING  CLOB, PRIMARY KEY (PRODUCTNAME) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.DEPLOYMENT ( DEPID VARCHAR(500) NOT NULL, PRODUCTTYPE VARCHAR(500), OFFERINGVERSION VARCHAR(500), DEPNAME VARCHAR(500), DPLAN CLOB, DEPNODE VARCHAR(4000), DEPPREREQ CLOB, DEPCONF CLOB, PRODUCTGENID CLOB, DEPSTATUS VARCHAR(100), INSTALLTYPE VARCHAR(100), PRODUCTNAME VARCHAR(500), DEPENDENTPRODUCT CLOB, CLUSTERINFO CLOB, COMPONENTINFO VARCHAR(4000), GEOID VARCHAR(500), DISPLAYVERSION VARCHAR(500), PRIMARY KEY (DEPID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PLATFORM ( PLATFORMTYPE VARCHAR(300) NOT NULL, OSTYPE CLOB, PRIMARY KEY (PLATFORMTYPE) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PREREQDEPLOYMENT ( PREQDEPID VARCHAR(500) NOT NULL, PREQDEPNAME VARCHAR(500), PREQPRODNAME CLOB, PREREQTYPE VARCHAR(500), PRECONFIG CLOB, PRENODE CLOB, PRODUCTDEPLOYMENT CLOB, PARAMID CLOB, CONFIGID CLOB, GEOID VARCHAR(500), PRIMARY KEY (PREQDEPID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PREREQMASTERCONFIG ( PREREQTYPE VARCHAR(300) NOT NULL, CONFIGDETAIL CLOB, PRIMARY KEY (PREREQTYPE) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.MASTERDEPLOYMENTPLAN ( DEPPLANID VARCHAR(500) NOT NULL, DEPPLANNAME VARCHAR(500), PLANID VARCHAR(500), PLANNAME VARCHAR(500), PLANDETAIL CLOB, PRIMARY KEY (DEPPLANID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PRODUCTOFFERING ( PRODUCTTYPE VARCHAR(500) NOT NULL, PRODUCTNAME VARCHAR(500), OFFERINGDETAIL CLOB, PRIMARY KEY (PRODUCTTYPE) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PREREQCONFIG ( PARAMID VARCHAR(500) NOT NULL, PARAM CLOB, PRIMARY KEY (PARAMID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.PRODUCTCONFIG ( PRODUCTGENID VARCHAR(500) NOT NULL, PARAMID VARCHAR(500), CONFIGURATION CLOB, PRODUCTTYPE VARCHAR(500), PRIMARY KEY (PRODUCTGENID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.CLUSTERDEPLOYMENT ( CLUSTERDEPID VARCHAR(500) NOT NULL, CLUSTERNAME VARCHAR(300), CLUSTERTYPE VARCHAR(200), PRODUCTTYPE VARCHAR(200), CELLNAME VARCHAR(300), NODELIST CLOB, CLUSTERMEMBER CLOB, CLUSTERCONFIG CLOB, CLUSTERSTATUS VARCHAR(500), DMDEPID VARCHAR(500), GEOID VARCHAR(500), CLUSTERVERSION VARCHAR(200), PRIMARY KEY (CLUSTERDEPID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.SCHEDULEDTASK ( DEPID VARCHAR(500), TASKID VARCHAR(500) NOT NULL, UPDATECONFIG CLOB, TASKTYPE VARCHAR(500), PRIMARY KEY (TASKID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.GEOGRAPHYDATA ( GEOID VARCHAR(500) NOT NULL, GEONAME VARCHAR(500), PRIMARY KEY (GEOID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE SSC.VERSIONINFO ( ID VARCHAR(300) NOT NULL, PRODUCTTYPE VARCHAR(500), TYPE VARCHAR(300), VERSIONHISTORY  CLOB, CURRENTVERSION  VARCHAR(300), PRIMARY KEY (ID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE POLICY.TEMPLATE ( POLICY_ID VARCHAR(128) NOT NULL, POLICY_LABEL VARCHAR(128) NOT NULL, POLICY_WEIGHT VARCHAR(128), POLICY_PRODUCT VARCHAR(128), POLICY_TYPE VARCHAR(128), POLICY_XML LONG VARCHAR, PRIMARY KEY(POLICY_ID) )
DB20000I  The SQL command completed successfully.


CREATE TABLE POLICY.ASSIGNMENT ( POLICY_ID VARCHAR(128) NOT NULL, POLICY_PRODUCT VARCHAR(128) NOT NULL, USER_ID VARCHAR(128) NOT NULL, IS_GROUP SMALLINT NOT NULL, FOREIGN KEY(POLICY_ID) REFERENCES POLICY.TEMPLATE(POLICY_ID) )
DB20000I  The SQL command completed successfully.


DB20000I  The SQL DISCONNECT command completed successfully.

This will prevent the db2inst1 user from needing admin rights:

db2 connect to STSC
db2 -tf createSchedTable.ddl

Make sure your SELINUX setting is disabled or permissive in /etc/selinux/config.  You will need to restart the server if you make a change.
In GNOME (Graphical environment):
If you are still logged in to GNOME with db2inst1 open a terminal and switch to root:
su -
Launch the System Console Installation:
<path to extracted installation files>/launchpad.sh
You will be prompted to install IBM Installation Manager.  I installed it to /opt/ibm/InstallationManager
Once done you can install System Console.  Accept the license agreements and click Next.
Choose your installation path.  I chose /opt/ibm/SSPShared Click Next.
Choose your installation path for WebSphere.  I chose /opt/ibm/WebSphere Click Next.
WAS Location> "Use Sametime installed Websphere Application Server"
WAS Configuration> Accept the defaults for Cell and Node (unless you know better than me here suggestions are welcome.)  Ensure your FQDN is correct for host name field.  Accept the default user for websphere (wasadmin) and give it a password.
Configure DB2 for the System Console>enter the db2 server FQDN.  Since we're installing everything on the same server enter the same server FQDN you've used in previous steps.  Make sure you enter the port that was displayed in the post-installation steps of your DB2 install.  Mine was port 50001.  Accept the default DB name for the System Console/Policy.
Enter db2inst1 for the application user id and the password you gave this user in previous steps.
Click the Validate button to ensure you can connect to the db2 database properly.
Click Install.  Installation progress will be displayed.  You should be done with no warning or exceptions.
Click Finish.
Test that you can log in to the System Console:

http://sametime.example.com:8700/ibm/console
https://sametime.example.com:8701/ibm/console

Use the wasadmin account and password that were created during the install.

For more details refer to:  http://www-10.lotus.com/ldd/stwiki.nsf/dx/Installing_the_console_on_AIX_Linux_Solaris_or_Windows_st852

Install IBM Sametime Standard Community Server V.8.5.2

Install IBM Sametime Standard V8.5.2 Media Manager (~45 mins.)

su - (To log in as root.  It will not allow you to install if you are not root.)
#cd <extracted Sametime Media Manager files>/SametimeMediaManager/IM/
#./install
Click Install (you can click file>preferences to ensure you have a repository selected)

Choose: I accept the terms in the license agreements
Next
Choose: Use existing WebSphere
Use Lotus Sametime System Console to install
Next
WAS Location>Use existing Location
SSC Login>
System console information
Hostname:foo.bar.com
Use SSL
HTTPS Port:
9443
User ID:
wasadmin
Password:
XXXXX
Enter the hostname of this computer:
foo.bar.com
Deployment Plan List>
Choose one we created earlier
Deployment Details>
Details of deployment we created.
Click Next.
Click Install.

Install IBM Sametime Standard V8.5.2 Meeting Server (~1 1/2 hrs)

su - (To log in as root.  It will not allow you to install if you are not root.)
#cd <extracted Sametime Meeting Server files>/IM/
#./install
Click Install (you can click file>preferences to ensure you have a repository selected)

Choose: I accept the terms in the license agreements
Next
Choose: Use existing WebSphere
Use Lotus Sametime System Console to install
Next
WAS Location>Use existing Location
SSC Login>
System console information
Hostname:foo.bar.com
Use SSL
HTTPS Port:
9443
User ID:
wasadmin
Password:
XXXXX
Enter the hostname of this computer:
foo.bar.com
Deployment Plan List>
Choose one we created earlier
Deployment Details>
Details of deployment we created.
Click Next.
Click Install.

Install IBM Sametime Standard V8.5.2 Proxy Server (~1 hr.)

In the Sametime System Console click:
Sametime System Console> Sametime Guided Activities>Install Sametime Proxy Server
Choose "Create a New Deployment Plan"
Click Next
Give it a name
Click Next
Choose Product Version 8.5.2
Click Next
Choose Primary Node
Click Next
Select the System Console
Click Next
Enter the Host name where the Proxy server will be installed
User ID: wasadmin
Password: xxxx
Confirm: xxxx
Click Next
Choose your Sametime Community Server to connect to
Click Next
Click Finish

Log into Gnome as db2inst1
Open a terminal and log-in as root:
su -
<password>
cd <extracted Proxy server install files>/IM/linux
#./install
Installation Manager will launch
Click Install
Choose Sametime Proxy Server
Click Next
Choose I accept the terms in the license agreements
Click Next
Choose "Use the existing package group"
Click Next
Ensure "Use Lotus Sametime System Console to Install" is selected.
Click Next
WAS Location>
"Use Sametime installed Websphere Application Server"
SSC Login>
Host Name: your Sametime system console server's FQDN
Use SSL selected
HTTPS Port: 9443
User ID: wasadmin
Password: xxxx
Installation host name: where you're installing the software
Click Validate
Deployment Plan List>
Choose the plan you created in the SSC
Deployment Details>
Review your deployment summary

Click Next
Review the Install Summary
Click Install

Reboot Problems with Services:


  • DB2 database not autostarting (fixed by logging in as dasusr1 and running ~/das/adm/dasauto -on) reboot to test:


# locate db2set
/home/db2inst1/sqllib/adm/db2set
/opt/ibm/db2/V9.7/adm/db2set
/opt/ibm/db2/V9.7/bin/db2setres
/opt/ibm/db2/V9.7_01/adm/db2set
/opt/ibm/db2/V9.7_01/bin/db2setres
/opt/ibm/sametime_installs/wser/db2setup
/opt/ibm/sametime_installs/wser/db2/linuxamd64/install/db2setup
/opt/ibm/sametime_installs/wser/db2/linuxamd64/install/db2setup_exec
/opt/ibm/sametime_installs/wser/nlpack/db2setup
/opt/ibm/sametime_installs/wser/nlpack/db2/linuxamd64/install/db2setup
/opt/ibm/sametime_installs/wser/nlpack/db2/linuxamd64/install/db2setup_exec
# /opt/ibm/db2/V9.7/adm/db2set
DB2SET processing complete, rc = 1302, SQLCODE = 0
# /opt/ibm/db2/V9.7_01/adm/db2set
DB2COMM=tcpip


# ./db2iauto -off db2inst1
# locate db2fmcu
/opt/ibm/db2/V9.7/bin/db2fmcu
/opt/ibm/db2/V9.7_01/bin/db2fmcu
# /opt/ibm/db2/V9.7_01/bin/db2fmcu -d
# /opt/ibm/db2/V9.7_01/bin/db2fmcu -u -p /opt/ibm/db2/V9.7/bin/db2fm
db2fm    db2fmcd  db2fmcu  db2fmd
# /opt/ibm/db2/V9.7_01/bin/db2fmcu -u -p /opt/ibm/db2/V9.7/bin/db2fm
db2fm    db2fmcd  db2fmcu  db2fmd
# /opt/ibm/db2/V9.7_01/bin/db2fmcu -u -p /opt/ibm/db2/V9.7/bin/db2fmc
db2fmcd  db2fmcu
# /opt/ibm/db2/V9.7_01/bin/db2fmcu -u -p /opt/ibm/db2/V9.7/bin/db2fmcd
# ./db2iauto -on db2inst1
# /opt/ibm/db2/V9.7_01/adm/db2set
DB2COMM=tcpip
DB2AUTOSTART=YES
# /opt/ibm/db2/V9.7/adm/db2set
DB2SET processing complete, rc = 1302, SQLCODE = 0




  • System Console websphere app not autostarting although init scripts seem to be in place. It turned out the installers do not create the node agent service scripts as they should.  You need to do this manually by using the wasservice.sh command.  Here's how to do it:


    • For Media Server:

/opt/ibm/WebSphere/AppServer/bin/wasservice.sh -add STMediaServer_NA -serverName nodeagent -profilePath /opt/ibm/WebSphere/AppServer/profiles/usfr-sameSTMSPNProfile1 -logRoot /opt/ibm/WebSphere/AppServer/profiles/usfr-sameSTMSPNProfile1/logs/nodeagent -username domino -password ******
chkconfig --list | grep STMediaServer
to ensure you see the services for the appropriate run levels


    • For Meeting Server:

/opt/ibm/WebSphere/AppServer/bin/wasservice.sh -add STMeetingServer_NA -serverName nodeagent -profilePath /opt/ibm/WebSphere/AppServer/profiles/usfr-sameSTMPNProfile1 -logRoot /opt/ibm/WebSphere/AppServer/profiles/usfr-sameSTMPNProfile1/logs/nodeagent -username domino -password *****

chkconfig --list | grep STMeetingServer
to ensure you see the services for the appropriate run levels


    • For Proxy Server:

/opt/ibm/WebSphere/AppServer/bin/wasservice.sh -add STProxyServer_NA -serverName nodeagent -profilePath /opt/ibm/WebSphere/AppServer/profiles/usfr-sameSTPPNProfile1 -logRoot /opt/ibm/WebSphere/AppServer/profiles/usfr-sameSTPPNProfile1/logs/nodeagent -username domino -password ******

chkconfig --list | grep STProxyServer
to ensure you see the services for the appropriate run levels

Migrating Contact/Buddy Lists from Old Community Server
Start by replicating the vpuserinfo.nsf to your new server from the old Sametime server environment.  If you are still setting up a test environment, then it's a great idea to create a push connection document from the old server to the new server to get any new updates to contact lists in the meantime.

If you are coming from a domino directory style community server (most likely)  you will need to convert the vpuserinfo.nsf into the new LDAP format.  To accomplish this you will need to create a Name Change task on the Sametime server.
...More to come on this soon.

Configuring Meeting Server to be used instead of classic Sametime Meetings
...More to come on this soon


Configuring Proxy Server
Proxy server enables mobile devices such as iOS (iphones) and Android to connect to the Sametime community.
...More to come on this soon


Upgrading to IFR1
Once you've got all of your Sametime services up and running on 8.5.2 you're done with installations, right?  Nope, there's lots more work to be done.  8.5.2 has an interim fix release 1 out (IFR1).  No need to worry, all of your upgrades should go smoothly on a fresh install...Not really, but we'll work through all the problems and shortcomings.

Gather all the Installation Packages

Here's a listing of almost everything available.
The eAssembly Part is:  CRG85ML
IBM Sametime System Console V8.5.2 IFR 1 Windows, AIX, x86 Linux, Solaris, IBM i Multilingual(CI3Y8ML)
IBM Sametime Community Server V8.5.2 IFR 1 AIX, x86 Linux, Solaris Multilingual(CI3YAML)
IBM Sametime Media Manager V8.5.2 IFR 1 Windows, AIX, x86 Linux, Solaris, IBM i Multilingual(CI3YEML)
IBM Sametime Proxy Server V8.5.2 IFR 1 Windows, AIX, x86 Linux, Solaris, IBM i Multilingual(CI3YCML)
IBM Sametime Meeting Server V8.5.2 IFR 1 Windows, AIX, x86 Linux, Solaris, IBM i Multilingual(CI3YDML)
IBM Sametime Gateway Server V8.5.2 IFR 1 Windows, AIX, x86 Linux, Solaris, IBM i Multilingual(CI3YFML)
IBM Sametime Connect Client V8.5.2 IFR 1 Windows, x86 Linux, Mac Multilingual(CI3YGML)




Thursday, November 17, 2011

Dell KACE: Applying an SSL Certificate to the K1000

KACE's documentation was a little lacking here, so I thought I'd do a quick write-up to describe the procedure I followed to successfully generate and apply an SSL certificate.

Generate a CSR
Prior to completing this task, go to Settings>Network Settings and make sure your Web Server name is in FQDN format.  Example:  K1000.dell.com
Now onto generating the CSR.
Generate a CSR (Certificate Signing Request) by clicking Settings>Security Settings>"Open SSL Certificate Wizard" on the K1000.

You will be presented with a web page that has all the typical fields to create a CSR.  When you've filled all these in you will click on the "Set CSR Options" button.  This will generate the CSR on the bottom half of the page.  You will copy the CSR as directed on the page, and apply for an SSL Certificate with a vendor.  My company uses Thawte, so we did it through their Enterprise portal.  I pasted in the CSR with the option ApacheSSL and generated the new certificate.  I'm assuming each vendor will be slightly different, but look for an option called ApacheSSL or something along those lines.

Once your certificate is signed you will need to copy and paste it from your vendor's website to a text file.  You will want it in X.509 format for the KACE to be able to apply it properly.  You can also choose to save it with the file extension x509 or cer, so you know what it is later.

Take a backup of your K1000 prior to applying the SSL certificate.  
Go to Settings>Server Maintenance Tab> Edit Mode
Click on "Run Backup".
This will take about 5 minutes.  When it completes and you can reconnect to the K1000, go back to the Settings>Server Maintenance Tab>Edit Mode and download the backup files somewhere safe.
Also make sure SSH is enabled so that KACE can get into the K1000 if you mess up :).

Applying the Certificate
On the K1000 go to Settings>Security Settings>edit mode
On the bottom of the page goto "Set SSL Certificate File:" and click the "Choose File" button.  Select the file you saved the certificate text into and click OK.

You will also need the intermediate certificate for Thawte (may not be true for all vendors.  Refer to their installation instructions to obtain the correct intermediate certificate for your server.)

Under Optional SSL Settings, put checks in only these 2 boxes:
Enable port 80 access
SSL Enabled on port 443

Click the "Set Security Options" button to finalize all the changes.  Your clients will now begin communicating with the server via SSL.  You can now deploy the agents using the SSL option and, when you are sure all your agents are connecting via this method, turn off the "Enable port 80 access" option if desired.






Friday, October 28, 2011

Domino Extended Directory Catalog - Rebuilding and Configuring

I ran into a problem recently where our corporate Domino Extended Directory Catalog needed some updating and had not been refreshing data properly.  The data was completely stale and hadn't been properly set up to update.

As a starting point I read this document on the IBM site about the EDC: https://www-304.ibm.com/support/docview.wss?uid=swg21093442

This was great for getting a handle on how the EDC is supposed to work, but some of the descriptions are out of date on the configuration documents.

Updating Settings
To update an existing EDC, open up the administration client.  Click on the configuration tab, and in the left-hand navigation pane, expand "Directory Cataloger."  You should check that you have entries for the appropriate filename, and times to run the task.  You should also see another entry based on the EDC's Title under the "Directory Cataloger" section.  This is where we'll want to make changes if we need to add new directories to aggregate etc.


Rebuilding
If you need to completely rebuild the EDC, go to the Advanced tab when viewing the Directory Cataloger><EDC TITLE> options.  Click the "Clear History" button.  This will force the EDC to rebuild everything completely when the task is next run.  You can click on the Server tab in the Admin Client and go to the Server Tasks view to see if Directory Cataloger is currently running.  If it's running and you'd like to stop it, on the server console type in "tell dircat quit".  Then to force an immediate rebuild after clearing the history, type in "load dircat <EDC Filename>.nsf".

That should be it.  You should see the directory file rebuilding in the Server Tasks view, and if you're inside the file you can watch updates by pressing F9 periodically.  You will then want to manually force replication to the other Domino servers holding a replica to get all the changes out into the environment.

Also you may want to setup directory assistance if you haven't already.  See this document to do so:
http://ksgnotes1.harvard.edu/help/help7_admin.nsf/f4b82fbb75e942a6852566ac0037f284/a4f6cf3dcc1e06ac852570610054b277?OpenDocument