Introduction
Building cross-platform applications has always been a challenge: the goal is to reduce development cost while keeping development productive. Many approaches are used and many libraries and frameworks exist. In this article, I'm not going to reinvent the wheel; I would just like to share my experience, along with a few tips and problems I ran into while building such an application, in the hope that it will be interesting to someone.
The task was to develop an application that could run on both Windows and macOS. The UI had to look modern and fit the style of both operating systems. An additional requirement was a solution for a cross-browser extension.
From my perspective, the easiest way to meet these requirements was to use web technologies.
To develop the cross-platform application, I chose HTML and JavaScript for the user interface and business logic, and to host that web UI I decided to use the WebBrowser ActiveX control on Windows and Qt WebKit on macOS.
The easiest way to build a cross-platform "extension" for the various browsers was, from my perspective, a local HTTP/HTTPS server running as a service on Windows and as a daemon on macOS.
In this article, I will cover the Windows side of the solution; if anybody finds the article interesting and helpful, I could provide the code and an introduction for macOS as well.
The web-technology approach to the UI was chosen because the same code can easily be adapted to run in a web browser and to build a Windows Store application, so it is possible to have one source code base driving both a portal (web) application and a desktop application, all of them looking the same and even sharing one business logic and data model. Modern web browsers allow you to render a powerful user interface, and the source code can be reused for a Windows Store application with minor changes. In the attached source code, you will find examples of both applications: the desktop application and the Windows Store application. The example web user interface was prepared with the React.js JavaScript framework and Bootstrap v3.3.5 styles. The application itself was built in C#/.NET around the WebBrowser control.
Opponents of Internet Explorer may say that it is not a good choice and that Internet Explorer discredited itself a long time ago. Believe me, it is not so: the WebBrowser control is really powerful and highly customizable. Probably the sheer amount of MSDN content about the WebBrowser ActiveX control sometimes makes it hard to find what you actually need, and I hope these tips will help you find the right way.
Let's go. The first thing that needs to be done is to customize the WebBrowser control (I will not describe here how to create a WinForms application or how to embed the control into it; all of that can be found in the provided source code).
In a real desktop application, some features of the WebBrowser control have to be disabled or hidden. For example, you don't need the standard context menu, you don't want the standard error dialog that appears when an exception occurs in JavaScript code, and you don't want to allow content to be dragged and dropped into the control or the standard keyboard shortcuts to work. The first three items are easily handled through the WebBrowser interface, for example:
...ScriptErrorsSuppressed = true;
...IsWebBrowserContextMenuEnabled = false;
...AllowWebBrowserDrop = false;
To customize or disable the standard keyboard shortcuts, however, you will need to create a custom control that inherits from the WebBrowser control and overrides the PreProcessMessage method. For example:
public override bool PreProcessMessage(ref Message msg)
{
    // Combine the virtual-key code (WParam) with the currently pressed modifier keys.
    int num = ((int)msg.WParam) | ((int)Control.ModifierKeys);
    // 0x102 is WM_CHAR; for the other keyboard messages, skip the control's default
    // accelerator processing when the combination is listed in LimitedShortcut.
    if ((msg.Msg != 0x102) && Enum.IsDefined(typeof(LimitedShortcut), (LimitedShortcut)num))
    {
        return false;
    }
    return base.PreProcessMessage(ref msg);
}
where LimitedShortcut is an enumeration that defines the set of key combinations to be given this special handling.
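The exact members of LimitedShortcut are in the attached source; a minimal sketch, with purely illustrative key combinations, could look like this:

using System.Windows.Forms;

// Illustrative only: the real LimitedShortcut enum ships with the attached source.
// Each value is a virtual-key code combined with modifier flags, matching the
// "num" computed in PreProcessMessage above.
public enum LimitedShortcut
{
    CtrlN = (int)Keys.Control | (int)Keys.N,   // new window
    CtrlO = (int)Keys.Control | (int)Keys.O,   // open file
    CtrlP = (int)Keys.Control | (int)Keys.P,   // print
    Refresh = (int)Keys.F5                     // page refresh
}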
You will probably also want to handle events that happen inside the WebBrowser control but are not propagated to the host through the standard events. For example, for the window.close() method, it is useful to have a Closing event:
protected override void WndProc(ref Message m)
{
    switch (m.Msg)
    {
        case (int)Msg.WM_PARENTNOTIFY:
            if (!DesignMode)
            {
                // WM_DESTROY in WParam means the hosted document window is being
                // destroyed (for example, after window.close() from script).
                if (m.WParam.ToInt32() == (int)Msg.WM_DESTROY)
                {
                    Closing(this, EventArgs.Empty);
                }
            }
            DefWndProc(ref m);
            break;
        default:
            base.WndProc(ref m);
            break;
    }
}
One more trick: how to show a modal dialog box (for example, FolderBrowserDialog) launched from JavaScript running in the WebBrowser control, while preventing the "long-running script" warning from appearing. It can be done by providing an implementation of the INewWindowManager interface, which manages pop-up windows launched from the WebBrowser control. It is a kind of hack: in the implementation of the interface, you can suppress all pop-up windows, including JavaScript engine messages such as the "long-running script" warning, or of course you can filter which kinds of pop-up windows are allowed to appear.
The interop declaration of the INewWindowManager interface:
[ComImport(), ComVisible(true),
Guid("D2BC4C84-3F72-4a52-A604-7BCBF3982CBB"),
InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)]
public interface INewWindowManager
{
    [return: MarshalAs(UnmanagedType.I4)]
    [PreserveSig]
    int EvaluateNewWindow(
        [In, MarshalAs(UnmanagedType.LPWStr)] string pszUrl,
        [In, MarshalAs(UnmanagedType.LPWStr)] string pszName,
        [In, MarshalAs(UnmanagedType.LPWStr)] string pszUrlContext,
        [In, MarshalAs(UnmanagedType.LPWStr)] string pszFeatures,
        [In, MarshalAs(UnmanagedType.Bool)] bool fReplace,
        [In, MarshalAs(UnmanagedType.U4)] uint dwFlags,
        [In, MarshalAs(UnmanagedType.U4)] uint dwUserActionTime);
}
Implementation of the INewWindowManager interface:
[ComVisible(true)]
[Guid("901C042C-ECB3-4f0f-9BE7-D096AEFD1BDE")]
public class NewWindowManager : INewWindowManager
{
    public int EvaluateNewWindow(string pszUrl,
        string pszName,
        string pszUrlContext,
        string pszFeatures,
        bool fReplace,
        uint dwFlags,
        uint dwUserActionTime)
    {
        return 0;
    }
}
In order to override the general window manager, you need to implement a custom WebBrowserSite with an implementation of the IServiceProvider interface, which provides a generic access mechanism for locating a GUID-identified service. In the implementation of the public int QueryService(ref Guid guidService, ref Guid riid, out IntPtr ppvObject) method, the caller specifies the service ID (SID, a GUID) and the IID of the interface to return (in our case, the INewWindowManager interface). You then hand back your custom implementation of INewWindowManager through the caller's interface pointer variable (ppvObject).
Implementation of a custom WebBrowserSite that implements IServiceProvider:
protected class WebBrowserSiteExt : WebBrowserSite, IServiceProvider, IDocHostShowUI
{
    #region Fields
    private Guid _managerId = new Guid("D2BC4C84-3F72-4a52-A604-7BCBF3982CBB");
    private readonly NewWindowManager _manager;
    #endregion

    public WebBrowserSiteExt(WebBrowser host)
        : base(host)
    {
        _manager = new NewWindowManager();
    }

    #region Implementation of IServiceProvider
    public int QueryService(ref Guid guidService, ref Guid riid, out IntPtr ppvObject)
    {
        if ((guidService == _managerId && riid == _managerId))
        {
            ppvObject = Marshal.GetComInterfaceForObject(_manager, typeof(INewWindowManager));
            if (ppvObject != IntPtr.Zero)
            {
                return 1;
            }
        }
        ppvObject = IntPtr.Zero;
        return -1;
    }
    #endregion
    ...
}
Additionally, it can sometimes be useful to derive your WebBrowserSite implementation from IDocHostShowUI as well, to provide your own mechanism for displaying message boxes and Help.
#region IDocHostShowUI Members
public int ShowMessage(IntPtr hwnd, string lpstrText,
    string lpstrCaption, uint dwType, string lpstrHelpFile, uint dwHelpContext, ref int lpResult)
{
    lpResult = 0;
    return 0;
}

public int ShowHelp(IntPtr hwnd, string pszHelpFile, uint uCommand,
    uint dwData, tagPOINT ptMouse, object pDispatchObjectHit)
{
    return 0;
}
#endregion
The tips above show different ways in which the standard WebBrowser control can be customized. You will probably need more customization, and I believe you can use these examples to create your own.
The next important thing you will need is external objects accessible from scripting code. This is easily done by creating your own object and exposing it through the ObjectForScripting property of the WebBrowser control. You can find examples of such objects in ExternalObject, webConsole, LocalStorage and XMLHttpRequest; ExternalObject encapsulates all of these objects and provides access to them. Make sure that every class you are going to pass to JavaScript has the Serializable and ComVisible(true) attributes.
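As a minimal, illustrative sketch of such a scripting object and of how it is attached to the control (the class name and members below are invented for the example; the attached source uses ExternalObject and friends):

using System;
using System.Runtime.InteropServices;

// Minimal sketch of a host object exposed to script; the attached source's
// ExternalObject plays this role. Name and members are illustrative only.
[Serializable]
[ComVisible(true)]
public class HostBridge
{
    // Callable from JavaScript as window.external.Log("...")
    public void Log(string message)
    {
        System.Diagnostics.Debug.WriteLine(message);
    }
}

// Wiring it up on the form hosting the control:
// webBrowser.ObjectForScripting = new HostBridge();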
Here I would like to share a couple of tricks which may be interesting.
The first one is the declaration of methods with optional arguments. For example, the open method of XMLHttpRequest has optional arguments, so to expose it to JavaScript it should be declared like this:
public void open(string method, string url, [Optional]object async,
[Optional]object username, [Optional]object password)
The arguments declared as [Optional]object are marshalled as COM objects wrapped in an RCW, so to get the actual value of the expected type, we need to do something like the following:
if (Marshal.IsComObject(prms))
{
    // The optional argument arrives as a COM object (RCW); read its value through IReflect.
    IReflect reflect = prms as IReflect;
    PropertyInfo[] infos = null;
    if (reflect != null)
    {
        infos = reflect.GetProperties(BindingFlags.GetProperty |
            BindingFlags.Instance | BindingFlags.Public);
        foreach (PropertyInfo info in infos)
        {
            return Convert.ToBoolean(info.GetValue(reflect, null));
        }
    }
}
else if (!(prms is Missing))
{
    // The argument was passed directly as a .NET type.
    return Convert.ToBoolean(prms);
}
The RCW has a reference count that is incremented every time a COM interface pointer is mapped to it. To decrement the reference count and allow the system to release the passed object, we have to call ReleaseComObject. I would therefore recommend calling ReleaseComObject for every object that was passed into your method as a COM object. Here is an example:
protected void ReleaseComObject(object obj)
{
    if (Marshal.IsComObject(obj))
    {
        Marshal.ReleaseComObject(obj);
    }
}
You may ask why I provided my own implementation of the XmlHttpRequest type. It was done to work around the cross-domain restriction of the native XmlHttpRequest object.
The custom XmlHttpRequest is built on the asynchronous programming model of the .NET Framework for sending requests and receiving responses. This approach was chosen because, in practice, a new instance of XmlHttpRequest is created for every request, and these are usually small, quickly completed requests; so to improve performance, it is better to avoid creating threads, passing context and, in some cases, synchronization objects.
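To illustrate the idea only (this is not the article's actual implementation), a stripped-down request using the Begin/End asynchronous pattern could look like the sketch below; the real XMLHttpRequest in the attached source adds readyState tracking, request headers and the onreadystatechange callback into script:

using System;
using System.IO;
using System.Net;

// Simplified sketch of the asynchronous request pattern the custom
// XMLHttpRequest is built on: no dedicated thread, just Begin/End calls.
public class AsyncRequestSketch
{
    public void Send(string url, Action<string> onCompleted)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.BeginGetResponse(ar =>
        {
            using (WebResponse response = request.EndGetResponse(ar))
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                onCompleted(reader.ReadToEnd());  // hand the response body back to the caller
            }
        }, null);
    }
}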
On the JavaScript side, the standard XmlHttpRequest needs to be replaced by the custom one, as follows:
if (window.external && window.external.console)
{
    window.console = window.external.console.constructor();
}
if (window.external && window.external.XMLHttpRequest)
{
    XMLHttpRequest = function ()
    {
        var xmlRequest = window.external.XMLHttpRequest;
        return xmlRequest.constructor().Target;
    };
}
The next problem I faced was how jQuery handles the response. I used jQuery to send requests from the web UI, and when making a request to the server on localhost (with jQuery using the overridden XmlHttpRequest), the client never received the event. The problem is that jQuery first sends the request and only then subscribes to the onreadystatechange event; if the response arrives faster than usual, the client never sees the event. To resolve the issue, I had to override jQuery's send logic by registering a custom transport. I used jQuery version 1.10.2; maybe it is fixed in other versions. Like this:
(function ()
{
    var transport = function (s)
    {
        if (!s.crossDomain || jQuery.support.cors)
        {
            var callback;
            return {
                send: function (headers, complete)
                {
                    var handle, i, xhr = s.xhr();
                    if (s.username)
                    {
                        xhr.open(s.type, s.url, s.async, s.username, s.password);
                    }
                    else
                    {
                        xhr.open(s.type, s.url, s.async);
                    }
                    if (s.xhrFields)
                    {
                        for (i in s.xhrFields)
                        {
                            xhr[i] = s.xhrFields[i];
                        }
                    }
                    if (s.mimeType && xhr.overrideMimeType)
                    {
                        xhr.overrideMimeType(s.mimeType);
                    }
                    if (!s.crossDomain && !headers["X-Requested-With"])
                    {
                        headers["X-Requested-With"] = "XMLHttpRequest";
                    }
                    try
                    {
                        for (i in headers)
                        {
                            xhr.setRequestHeader(i, headers[i]);
                        }
                    }
                    catch (err) { }
                    callback = function (_, isAbort)
                    {
                        var status, responseHeaders, statusText, responses;
                        try
                        {
                            if (callback && (isAbort || xhr.readyState === 4))
                            {
                                callback = undefined;
                                if (handle)
                                {
                                    xhr.onreadystatechange = jQuery.noop;
                                    if (xhrOnUnloadAbort)
                                    {
                                        delete xhrCallbacks[handle];
                                    }
                                }
                                if (isAbort)
                                {
                                    if (xhr.readyState !== 4)
                                    {
                                        xhr.abort();
                                    }
                                }
                                else
                                {
                                    responses = {};
                                    status = xhr.status;
                                    responseHeaders = xhr.getAllResponseHeaders();
                                    if (typeof xhr.responseText === "string")
                                    {
                                        responses.text = xhr.responseText;
                                    }
                                    try
                                    {
                                        statusText = xhr.statusText;
                                    }
                                    catch (e)
                                    {
                                        statusText = "";
                                    }
                                    if (!status && s.isLocal && !s.crossDomain)
                                    {
                                        status = responses.text ? 200 : 404;
                                    }
                                    else if (status === 1223)
                                    {
                                        status = 204;
                                    }
                                }
                            }
                        }
                        catch (firefoxAccessException)
                        {
                            if (!isAbort)
                            {
                                complete(-1, firefoxAccessException);
                            }
                        }
                        if (responses)
                        {
                            complete(status, statusText, responses, responseHeaders);
                        }
                    };
                    xhr.onreadystatechange = callback;
                    xhr.send((s.hasContent && s.data) || null);
                },
                abort: function ()
                {
                    if (callback)
                    {
                        callback(undefined, true);
                    }
                }
            };
        }
    };
    jQuery.ajaxTransport('script', transport);
    jQuery.ajaxTransport('text', transport);
})();
The next tip is how to propagate events to JavaScript. We can emit events to script by using reflection: every function in JavaScript is an object, so to provide a callback to JavaScript we store the function (object) in our event emitter and, when the time comes, invoke it as in the example below:
System.Type t = callback.GetType();
try
{
    List<object> args = new List<object>();
    if (state != null)
    {
        foreach (object arg in state)
        {
            args.Add(arg);
        }
    }
    // An empty member name with BindingFlags.InvokeMethod invokes the JScript function object itself.
    t.InvokeMember("", System.Reflection.BindingFlags.InvokeMethod,
        null, callback, args.ToArray());
}
catch (Exception e) { … }
You can find the source code in the ScriptEventDelegate class, and examples of how to use it in the external objects.
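As a rough usage sketch (the class and method names below are hypothetical; see ScriptEventDelegate and the external objects for the real code), an external object can store a callback passed from script and later raise it:

using System;
using System.Reflection;
using System.Runtime.InteropServices;

// Illustrative sketch: an external object that stores a JavaScript function passed
// from script (e.g. window.external.Events.subscribe(function (msg) { ... })) and
// later invokes it through reflection, as shown above. Names are hypothetical.
[Serializable]
[ComVisible(true)]
public class ScriptEventEmitter
{
    private object _callback;

    // Called from JavaScript with a function object as the argument.
    public void subscribe(object callback)
    {
        _callback = callback;
    }

    // Called from the host side when the event occurs.
    public void Emit(string message)
    {
        if (_callback == null) return;
        _callback.GetType().InvokeMember("", BindingFlags.InvokeMethod,
            null, _callback, new object[] { message });
    }
}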
So far, I have covered more or less all the problems I faced while creating an application with a web UI. Next, I would like to share my experience with "browser extensions".
In this part, I will share my experience of building a cross-browser "extension" in a simple manner. To be honest, it is not exactly an extension; it is just a local HTTP/HTTPS server which handles requests from web applications running in the browser. But the solution works well when you have a web application running in a browser and would like to perform an action outside of the browser, for example, on a user event, launch a desktop application, call a system method, and so on.
So first, let's create an HTTPS server with a self-signed certificate, so that requests can be made from web applications served over either HTTP or HTTPS. Creating a self-signed certificate for localhost requires three steps:
- Create root certificate
makecert.exe -r -pe -n "CN=<cert name>" -ss CA -sr LocalMachine -a sha1 -sky signature
-cy authority -sv CA.pvk CA.cer
- Create a server authentication certificate for localhost, signed with the root certificate generated in the first step
makecert.exe -pe -n "CN=localhost" -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1
-ic CA.cer -iv CA.pvk -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12
-sv server.pvk server.cer
- Export pvk container to pfx
pvk2pfx.exe -pvk server.pvk -spc server.cer -pfx server.pfx
More detailed information about these tools can be found on MSDN.
Now you need to install the certificates into the system certificate store, and of course you want to do that from the application installer, keeping the process smooth for the user. To have it working for Internet Explorer, Chrome and Safari, you have to install them into:
- The X.509 certificate store for trusted root certificate authorities (CAs)
- The X.509 certificate store for intermediate certificate authorities (CAs)
- The X.509 certificate store for personal certificates
under the certificate store assigned to the Local Machine. Your installer has to run with administrative rights, but in return the certificate will be available to any account on the local machine.
You can find out how to install certificates and how to associate them with localhost on a specific port on MSDN:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa364503(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/aa364649(v=vs.85).aspx.
In my source code, you can find a wrapper around the Win32 methods and examples of how to use them. SSLCertInstaller wraps the Win32 methods needed to install a certificate and to bind it to localhost on a specific port. Here is an example of how to use the wrapper:
using (SSLCertInstaller installer = new SSLCertInstaller(StoreName.Root, StoreLocation.LocalMachine))
{
    installer.InstallCertificate(new X509Certificate2(
        Path.Combine((new FileInfo(Assembly.GetExecutingAssembly().Location)).DirectoryName, _cacert), "",
        X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.Exportable));
    installer.InstallCertificate(new X509Certificate2(
        Path.Combine((new FileInfo(Assembly.GetExecutingAssembly().Location)).DirectoryName, _percert), "",
        X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.Exportable));
}
using (SSLCertInstaller installer =
    new SSLCertInstaller(StoreName.CertificateAuthority, StoreLocation.LocalMachine))
{
    installer.InstallCertificate(new X509Certificate2(
        Path.Combine((new FileInfo(Assembly.GetExecutingAssembly().Location)).DirectoryName, _cacert), "",
        X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.Exportable));
}
using (SSLCertInstaller installer =
    new SSLCertInstaller(StoreName.My, StoreLocation.LocalMachine))
{
    installer.InstallCertificate(new X509Certificate2(
        Path.Combine((new FileInfo(Assembly.GetExecutingAssembly().Location)).DirectoryName, _percert), "",
        X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.Exportable));
    installer.GrantAccess(_domainname);
    int port = Win32API.GetNextVacantPort(
        Utils.Properties.Settings.Default.DEF_HTTPS_PORT, Utils.Properties.Settings.Default.ATTEMPTS_COUNT);
    Helpers.WritePortToFile(port);
    installer.AssociateCertificate(_domainname, _ipaddress, port, true);
}
Everything works as described on MSDN. One thing I would like to highlight: it is necessary to change the access rights of the private key file so that it is accessible to any authorized user account on the local machine. Below is the method which does this:
public void GrantAccess(string CN)
{
    X509Certificate2Collection certificate =
        store.Certificates.Find(X509FindType.FindBySubjectName, CN, false);
    if (certificate.Count == 0)
    {
        throw new NotFoundException(string.Format(
            "Certificate {0} is not found at certificate store.", CN));
    }
    RSACryptoServiceProvider rsa = certificate[0].PrivateKey as RSACryptoServiceProvider;
    if (rsa != null)
    {
        // Locate the private key container file and grant Authenticated Users full control,
        // so that the key is accessible from any account on the machine.
        string keyfilepath = FindKeyLocation(rsa.CspKeyContainerInfo.UniqueKeyContainerName);
        FileInfo file = new FileInfo(keyfilepath + "\\" + rsa.CspKeyContainerInfo.UniqueKeyContainerName);
        FileSecurity fs = file.GetAccessControl();
        fs.AddAccessRule(new FileSystemAccessRule(
            new SecurityIdentifier(WellKnownSidType.AuthenticatedUserSid, null),
            FileSystemRights.FullControl, AccessControlType.Allow));
        file.SetAccessControl(fs);
    }
    _isAccessed = true;
}
We installed the certificate for Internet Explorer, Chrome and Safari; all of them use the single system certificate store. Unfortunately, Firefox and Opera use their own stores. I found a way to install the certificate into the Firefox store, but I did not find one for Opera, and I would appreciate it if somebody could share their experience with that.
For Firefox, I used the certutil tool from NSS Tools to update the certificate store.
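A typical invocation looks roughly like the following; treat it as a sketch, since the trust flags and the profile database path depend on the Firefox version and the NSS database format in use:
certutil.exe -A -n "<cert name>" -t "C,," -i CA.cer -d "<path to the Firefox profile directory>"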
With the self-signed certificate created and installed into the system, we can now create the HTTP/HTTPS server running on localhost. The .NET Framework has a rich hierarchy of classes for this; you can use any of them, and I chose the high-level HttpListener class. The server can be created in a few lines of code using this class. Here is an example:
_worker = new Thread(() =>
{
    ServiceSettings.InitializeLog(GlobalUtils.Properties.Settings.Default.SERVICE_NAME);
    WriteInfoToLog("Initializing...");
    _config = new ServiceSettings();
    _handlersManager.Load(Path.Combine(_config.HTTPDir, "handlers"));
    _rmService = new RemoteServer(_config.HTTPPort, _config.HTTPSPort);
    WriteInfoToLog("Starting a work");
    _locker.IsLocked = false;
    _stopServer.Reset();
    using (HttpListener listener = new HttpListener())
    {
        try
        {
            WriteInfoToLog("Listening On secure port: " + _rmService.HttpsPort.ToString());
            WriteInfoToLog("Listening On port: " + _rmService.HttpPort.ToString());
            listener.Prefixes.Add("https://+:" + _rmService.HttpsPort.ToString() + "/");
            listener.Prefixes.Add("http://+:" + _rmService.HttpPort.ToString() + "/");
            listener.Start();
        }
        catch (Exception e)
        {
            WriteErrorToLog("Exception at HTTPServer.Listen: " + e.Message + "\r\n" + e.StackTrace);
        }
        WriteInfoToLog("Waiting for connection...");
        List<WaitHandle> handles = new List<WaitHandle>();
        handles.Add(_stopServer);
        try
        {
            _rmService.Start();
        }
        catch (Exception ex)
        {
            WriteErrorToLog("Exception at HTTPServer.Listen when Remote Http Server starting: " +
                ex.Message + "\r\n" + ex.StackTrace);
        }
        WriteInfoToLog("Started a work");
        while (listener.IsListening)
        {
            try
            {
                IAsyncResult result = listener.BeginGetContext(DoAcceptTcpClientCallback, listener);
                handles.Add(result.AsyncWaitHandle);
                WaitHandle.WaitAny(handles.ToArray());
                handles.Remove(result.AsyncWaitHandle);
                result.AsyncWaitHandle.Close();
                if (_stopServer.WaitOne(0, true))
                {
                    listener.Stop();
                    return;
                }
                _locker.Check();
            }
            catch (Exception ex)
            {
                if (listener != null)
                {
                    listener.Stop();
                }
                WriteErrorToLog(ex.StackTrace);
            }
        }
        WriteInfoToLog("Stopped a work");
    }
});
_worker.Start();
Here, the synchronization object ManualResetEvent _stopServer is used to abort listening when the server is stopped.
The server uses the asynchronous model of the .NET Framework, like the XmlHttpRequest described above, and all file operations (writing during file upload, and reading) are asynchronous as well. I think avoiding threads for short operations makes the server more memory efficient, which I would say is critical if the server is used as a Windows service.
Also, if you are using the server as a Windows service, I would recommend starting the real work on a worker thread, so that the service initializes and starts as quickly as possible. One more tip: it is probably a good idea to make the service depend on, for example, the RpcSs service, as shown below. I am not sure it is the right choice, but the problem I tried to solve with this approach was to delay my service until the other system services it may depend on have been loaded.
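As a sketch of declaring that dependency (assuming a standard ServiceInstaller-based installer; the service name below is illustrative):

using System.ComponentModel;
using System.ServiceProcess;

// Sketch: the installer tells the Service Control Manager to start the listed
// services (here RpcSs) before ours. The service name is illustrative only.
[RunInstaller(true)]
public class HttpServiceInstaller : System.Configuration.Install.Installer
{
    public HttpServiceInstaller()
    {
        ServiceProcessInstaller process = new ServiceProcessInstaller();
        process.Account = ServiceAccount.LocalSystem;

        ServiceInstaller service = new ServiceInstaller();
        service.ServiceName = "LocalHttpService";        // illustrative name
        service.StartType = ServiceStartMode.Automatic;
        service.ServicesDependedOn = new[] { "RpcSs" };   // start after the RPC service

        Installers.Add(process);
        Installers.Add(service);
    }
}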
The server is pretty simple and only serves static content, but I added the possibility to extend it by implementing something similar to FastCGI. All that needs to be done is to implement the IHandler interface in your own library and put the assembly in a specific place; the system will load it automatically when the server initializes.
public interface IHandler : IDisposable
{
    string Name { get; }
    void SetEnvironment(IHandlerEnvironment env);
    bool IsSupported(IHttpContext context);
    void Process(IHttpContext context);
}
You can find an example of a handler in the HandlerExample1 project.
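For orientation, a minimal handler could look roughly like the sketch below. Note that the IHttpContext members used here (Request.Url, Response.Write) are assumptions made for the example; check the interfaces shipped with the source for their real shape.

using System;

// Minimal handler sketch: answers "pong" to requests addressed to /ping.
// IHttpContext/IHandlerEnvironment are the article's own abstractions; the
// members used below are assumed for illustration.
public class PingHandler : IHandler
{
    private IHandlerEnvironment _env;

    public string Name
    {
        get { return "PingHandler"; }
    }

    public void SetEnvironment(IHandlerEnvironment env)
    {
        _env = env;  // keep the server settings (root directory, ports, ...) for later use
    }

    public bool IsSupported(IHttpContext context)
    {
        // Claim only requests addressed to /ping (assumed member: Request.Url).
        return context.Request.Url.AbsolutePath.Equals("/ping", StringComparison.OrdinalIgnoreCase);
    }

    public void Process(IHttpContext context)
    {
        context.Response.Write("pong");  // assumed convenience member; see the real IHttpContext
    }

    public void Dispose()
    {
    }
}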
One thing I forgot to mention is the server's ports. As I said above, you need to bind the certificate to localhost on the specific port you are going to listen on. How do you share that port? In my example, I use a simple file, but I guess a more efficient way is the registry (see the sketch below). The port then has to be discovered by the clients, both desktop clients and web applications. For a desktop client, it is pretty straightforward; to share the port, we can use, for example, .NET Remoting (you can find an example of this in the code of the HTTP server and client). For a web application running in a browser, that will not work, so for these kinds of clients we can select a range of ports that may be used (a range, because some of them can be unavailable) and let the web application find the local server by simply iterating over that range.
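A sketch of the registry variant might look like this; the key path and value name are arbitrary choices made for the example, and writing under HKLM requires the elevated rights the installer or service already has:

using Microsoft.Win32;

// Sketch of sharing the chosen port through the registry instead of a file.
// The key path and value name are arbitrary choices for this example.
public static class PortRegistry
{
    private const string KeyPath = @"SOFTWARE\MyLocalHttpServer";

    public static void Publish(int port)
    {
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(KeyPath))
        {
            key.SetValue("HttpsPort", port, RegistryValueKind.DWord);
        }
    }

    public static int Read(int defaultPort)
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(KeyPath))
        {
            return key != null ? (int)key.GetValue("HttpsPort", defaultPort) : defaultPort;
        }
    }
}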
I have since added changes to improve the performance of request handling.
Previously, reading the data and writing it to the output stream were not performed in parallel when processing a request, for example when serving a static resource such as an image. With a small network bandwidth, this could be noticeable.
The change allows reading from the source and writing to the output stream to happen in parallel, implemented through a queue of input/output buffers: a new buffer of data is added to the queue as it is read, and a buffer is taken from the queue as soon as the output stream is ready to write the next chunk. The change also makes it possible to start several read operations in parallel.
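Conceptually, the change looks like the sketch below: a reader fills a bounded queue with buffers while the writer drains it. The sketch uses BlockingCollection and Task from .NET 4+ for brevity, whereas the attached source builds the same idea on the asynchronous Begin/End model:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

// Conceptual sketch of the read/write overlap: a reader task fills a bounded
// queue with chunks while the writer drains it, so a slow client no longer
// stalls the disk read (and vice versa).
public static class ParallelCopy
{
    public static void Copy(Stream source, Stream destination, int bufferSize = 64 * 1024)
    {
        var chunks = new BlockingCollection<byte[]>(boundedCapacity: 4);

        Task reader = Task.Run(() =>
        {
            var buffer = new byte[bufferSize];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = new byte[read];
                Array.Copy(buffer, chunk, read);
                chunks.Add(chunk);        // blocks when the queue is full
            }
            chunks.CompleteAdding();      // tell the writer there is no more data
        });

        foreach (var chunk in chunks.GetConsumingEnumerable())
        {
            destination.Write(chunk, 0, chunk.Length);
        }

        reader.Wait();
    }
}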
In the attached source code, you will find three solutions: .\_build\AppJSRunnerAndServer.sln and .\_build\AppJSRunnerAndService.sln for Visual Studio 2008, and .\_build\WinStoreApp.sln for Visual Studio 2012.
The first two solutions contain the projects that build the client and the HTTP/HTTPS service. The AppJSRunnerAndServer.sln solution builds the desktop client application and a desktop HTTP/HTTPS server application.
The AppJSRunnerAndService.sln solution does a similar job, except that it builds an HTTP/HTTPS Windows service instead of a desktop HTTP/HTTPS server application.
Both of them use the same source code. When the solutions have been built successfully, you will find a folder .\_debug or .\_release, depending on which configuration you selected. In that folder, simply run start_http_server.bat if you built the AppJSRunnerAndServer.sln solution, or register_http_service.bat if you built AppJSRunnerAndService.sln, and then run webclient.exe. If everything was built correctly, you will see the application with the web UI; the application creates an icon in the system tray. It is just an example, so it does almost nothing: you can drag the window by its title, pin it to the desktop, navigate through the tabs and launch the FolderBrowserDialog. You can replace the UI and the logic; just have a look at how the dragging is handled, especially at what is used to determine the title area.
You can find the web project itself in the directory .\sources\JsSource\. To see how it works in a web browser (for example Internet Explorer, Chrome or Firefox), just launch the HTTP/HTTPS service and navigate in the browser to "http://localhost:8081/index.html" or "https://localhost:8190/index.html". One note: to have it running over HTTPS, you will need to build the application as a service and then launch register_http_service.bat. For the desktop server variant, the certificate has to be installed for the local user, so to make HTTPS work there you would need to change the certificate installer a bit.
And finally, I would appreciate any comments and improvements.